00:00:00.001 Started by upstream project "autotest-per-patch" build number 132057 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.030 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.031 The recommended git tool is: git 00:00:00.031 using credential 00000000-0000-0000-0000-000000000002 00:00:00.034 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.053 Fetching changes from the remote Git repository 00:00:00.055 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.086 Using shallow fetch with depth 1 00:00:00.086 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.086 > git --version # timeout=10 00:00:00.121 > git --version # 'git version 2.39.2' 00:00:00.121 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.169 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.169 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.887 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.901 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.914 Checking out Revision 71582ff3be096f9d5ed302be37c05572278bd285 (FETCH_HEAD) 00:00:02.914 > git config core.sparsecheckout # timeout=10 00:00:02.926 > git read-tree -mu HEAD # timeout=10 00:00:02.944 > git checkout -f 71582ff3be096f9d5ed302be37c05572278bd285 # timeout=5 00:00:02.963 Commit message: "jenkins/jjb-config: Add SPDK_TEST_NVME_INTERRUPT to nvme-phy job" 00:00:02.964 > git rev-list --no-walk 71582ff3be096f9d5ed302be37c05572278bd285 # timeout=10 00:00:03.083 [Pipeline] Start of Pipeline 00:00:03.098 [Pipeline] library 00:00:03.099 Loading library shm_lib@master 00:00:03.099 Library shm_lib@master is cached. Copying from home. 00:00:03.115 [Pipeline] node 00:00:03.133 Running on WFP39 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:03.135 [Pipeline] { 00:00:03.146 [Pipeline] catchError 00:00:03.148 [Pipeline] { 00:00:03.162 [Pipeline] wrap 00:00:03.172 [Pipeline] { 00:00:03.181 [Pipeline] stage 00:00:03.183 [Pipeline] { (Prologue) 00:00:03.404 [Pipeline] sh 00:00:03.691 + logger -p user.info -t JENKINS-CI 00:00:03.709 [Pipeline] echo 00:00:03.711 Node: WFP39 00:00:03.718 [Pipeline] sh 00:00:04.024 [Pipeline] setCustomBuildProperty 00:00:04.033 [Pipeline] echo 00:00:04.034 Cleanup processes 00:00:04.038 [Pipeline] sh 00:00:04.322 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.322 2732796 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.333 [Pipeline] sh 00:00:04.615 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.615 ++ awk '{print $1}' 00:00:04.615 ++ grep -v 'sudo pgrep' 00:00:04.615 + sudo kill -9 00:00:04.615 + true 00:00:04.630 [Pipeline] cleanWs 00:00:04.639 [WS-CLEANUP] Deleting project workspace... 00:00:04.639 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.646 [WS-CLEANUP] done 00:00:04.650 [Pipeline] setCustomBuildProperty 00:00:04.663 [Pipeline] sh 00:00:04.945 + sudo git config --global --replace-all safe.directory '*' 00:00:05.026 [Pipeline] httpRequest 00:00:05.506 [Pipeline] echo 00:00:05.507 Sorcerer 10.211.164.101 is alive 00:00:05.514 [Pipeline] retry 00:00:05.516 [Pipeline] { 00:00:05.526 [Pipeline] httpRequest 00:00:05.531 HttpMethod: GET 00:00:05.532 URL: http://10.211.164.101/packages/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz 00:00:05.532 Sending request to url: http://10.211.164.101/packages/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz 00:00:05.534 Response Code: HTTP/1.1 200 OK 00:00:05.534 Success: Status code 200 is in the accepted range: 200,404 00:00:05.535 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz 00:00:06.189 [Pipeline] } 00:00:06.200 [Pipeline] // retry 00:00:06.205 [Pipeline] sh 00:00:06.486 + tar --no-same-owner -xf jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz 00:00:06.501 [Pipeline] httpRequest 00:00:06.849 [Pipeline] echo 00:00:06.851 Sorcerer 10.211.164.101 is alive 00:00:06.856 [Pipeline] retry 00:00:06.857 [Pipeline] { 00:00:06.869 [Pipeline] httpRequest 00:00:06.874 HttpMethod: GET 00:00:06.874 URL: http://10.211.164.101/packages/spdk_2f35f359924931896e78df4a02e6a4f6b55d370f.tar.gz 00:00:06.875 Sending request to url: http://10.211.164.101/packages/spdk_2f35f359924931896e78df4a02e6a4f6b55d370f.tar.gz 00:00:06.891 Response Code: HTTP/1.1 200 OK 00:00:06.892 Success: Status code 200 is in the accepted range: 200,404 00:00:06.892 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_2f35f359924931896e78df4a02e6a4f6b55d370f.tar.gz 00:01:07.646 [Pipeline] } 00:01:07.665 [Pipeline] // retry 00:01:07.673 [Pipeline] sh 00:01:07.961 + tar --no-same-owner -xf spdk_2f35f359924931896e78df4a02e6a4f6b55d370f.tar.gz 00:01:10.556 [Pipeline] sh 00:01:10.843 + git -C spdk log --oneline -n5 00:01:10.843 2f35f3599 bdev/nvme: Add spdk_bdev_nvme_get_each_spdk_nvme_ctrlr function 00:01:10.843 f220d590c nvmf: rename passthrough_nsid -> passthru_nsid 00:01:10.843 1a1586409 nvmf: use bdev's nsid for admin command passthru 00:01:10.843 892c29f49 nvmf: pass nsid to nvmf_ctrlr_identify_ns() 00:01:10.843 fb6c49f2f bdev: add spdk_bdev_get_nvme_nsid() 00:01:10.858 [Pipeline] } 00:01:10.874 [Pipeline] // stage 00:01:10.885 [Pipeline] stage 00:01:10.888 [Pipeline] { (Prepare) 00:01:10.906 [Pipeline] writeFile 00:01:10.922 [Pipeline] sh 00:01:11.209 + logger -p user.info -t JENKINS-CI 00:01:11.222 [Pipeline] sh 00:01:11.509 + logger -p user.info -t JENKINS-CI 00:01:11.522 [Pipeline] sh 00:01:11.807 + cat autorun-spdk.conf 00:01:11.807 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.807 SPDK_TEST_FUZZER_SHORT=1 00:01:11.807 SPDK_TEST_FUZZER=1 00:01:11.807 SPDK_TEST_SETUP=1 00:01:11.807 SPDK_RUN_UBSAN=1 00:01:11.816 RUN_NIGHTLY=0 00:01:11.820 [Pipeline] readFile 00:01:11.845 [Pipeline] withEnv 00:01:11.847 [Pipeline] { 00:01:11.859 [Pipeline] sh 00:01:12.147 + set -ex 00:01:12.147 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:01:12.147 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:12.147 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.147 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:12.147 ++ SPDK_TEST_FUZZER=1 00:01:12.147 ++ SPDK_TEST_SETUP=1 00:01:12.147 ++ SPDK_RUN_UBSAN=1 00:01:12.147 ++ RUN_NIGHTLY=0 00:01:12.147 + case $SPDK_TEST_NVMF_NICS in 00:01:12.147 + DRIVERS= 
00:01:12.147 + [[ -n '' ]] 00:01:12.147 + exit 0 00:01:12.156 [Pipeline] } 00:01:12.170 [Pipeline] // withEnv 00:01:12.175 [Pipeline] } 00:01:12.188 [Pipeline] // stage 00:01:12.196 [Pipeline] catchError 00:01:12.198 [Pipeline] { 00:01:12.212 [Pipeline] timeout 00:01:12.212 Timeout set to expire in 30 min 00:01:12.214 [Pipeline] { 00:01:12.228 [Pipeline] stage 00:01:12.230 [Pipeline] { (Tests) 00:01:12.243 [Pipeline] sh 00:01:12.530 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:12.530 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:12.530 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:01:12.530 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:01:12.530 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:12.530 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:12.530 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:01:12.530 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:12.530 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:12.530 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:12.530 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:01:12.530 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:12.530 + source /etc/os-release 00:01:12.530 ++ NAME='Fedora Linux' 00:01:12.530 ++ VERSION='39 (Cloud Edition)' 00:01:12.530 ++ ID=fedora 00:01:12.530 ++ VERSION_ID=39 00:01:12.530 ++ VERSION_CODENAME= 00:01:12.530 ++ PLATFORM_ID=platform:f39 00:01:12.530 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:12.530 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:12.530 ++ LOGO=fedora-logo-icon 00:01:12.530 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:12.530 ++ HOME_URL=https://fedoraproject.org/ 00:01:12.530 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:12.530 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:12.530 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:12.530 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:12.530 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:12.530 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:12.530 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:12.530 ++ SUPPORT_END=2024-11-12 00:01:12.530 ++ VARIANT='Cloud Edition' 00:01:12.530 ++ VARIANT_ID=cloud 00:01:12.530 + uname -a 00:01:12.530 Linux spdk-wfp-39 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux 00:01:12.530 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:01:15.826 Hugepages 00:01:15.826 node hugesize free / total 00:01:15.826 node0 1048576kB 0 / 0 00:01:15.826 node0 2048kB 0 / 0 00:01:15.826 node1 1048576kB 0 / 0 00:01:15.826 node1 2048kB 0 / 0 00:01:15.826 00:01:15.826 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:15.826 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:15.826 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:15.826 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:15.826 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:15.826 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:15.826 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:15.826 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:15.826 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:16.086 NVMe 0000:1a:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:16.086 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:16.086 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:16.086 I/OAT 
0000:80:04.2 8086 2021 1 ioatdma - - 00:01:16.086 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:16.086 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:16.086 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:16.086 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:16.086 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:16.086 + rm -f /tmp/spdk-ld-path 00:01:16.086 + source autorun-spdk.conf 00:01:16.086 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.086 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:16.086 ++ SPDK_TEST_FUZZER=1 00:01:16.086 ++ SPDK_TEST_SETUP=1 00:01:16.086 ++ SPDK_RUN_UBSAN=1 00:01:16.086 ++ RUN_NIGHTLY=0 00:01:16.086 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:16.086 + [[ -n '' ]] 00:01:16.086 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:16.086 + for M in /var/spdk/build-*-manifest.txt 00:01:16.086 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:16.086 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:16.086 + for M in /var/spdk/build-*-manifest.txt 00:01:16.086 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:16.086 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:16.086 + for M in /var/spdk/build-*-manifest.txt 00:01:16.086 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:16.086 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:16.086 ++ uname 00:01:16.086 + [[ Linux == \L\i\n\u\x ]] 00:01:16.086 + sudo dmesg -T 00:01:16.346 + sudo dmesg --clear 00:01:16.346 + dmesg_pid=2733837 00:01:16.346 + sudo dmesg -Tw 00:01:16.346 + [[ Fedora Linux == FreeBSD ]] 00:01:16.346 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:16.346 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:16.346 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:16.346 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:16.346 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:16.346 + [[ -x /usr/src/fio-static/fio ]] 00:01:16.346 + export FIO_BIN=/usr/src/fio-static/fio 00:01:16.346 + FIO_BIN=/usr/src/fio-static/fio 00:01:16.346 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:16.346 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:16.346 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:16.346 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:16.346 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:16.346 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:16.346 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:16.346 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:16.346 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:16.346 10:28:42 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:16.346 10:28:42 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:16.346 10:28:42 -- short-fuzz-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.346 10:28:42 -- short-fuzz-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_FUZZER_SHORT=1 00:01:16.346 10:28:42 -- short-fuzz-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_FUZZER=1 00:01:16.346 10:28:42 -- short-fuzz-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_SETUP=1 00:01:16.346 10:28:42 -- short-fuzz-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1 00:01:16.346 10:28:42 -- short-fuzz-phy-autotest/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:01:16.346 10:28:42 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:16.346 10:28:42 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:16.346 10:28:42 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:16.346 10:28:42 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:01:16.346 10:28:42 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:16.346 10:28:42 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:16.346 10:28:42 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:16.346 10:28:42 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:16.346 10:28:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.346 10:28:42 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.346 10:28:42 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.346 10:28:42 -- paths/export.sh@5 -- $ export PATH 00:01:16.346 10:28:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.346 10:28:42 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:01:16.346 10:28:42 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:16.346 10:28:42 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730798922.XXXXXX 00:01:16.346 10:28:42 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730798922.68WLcx 00:01:16.347 10:28:42 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:16.347 10:28:42 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:16.347 10:28:42 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:01:16.347 10:28:42 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:16.347 10:28:42 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:16.347 10:28:42 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:16.347 10:28:42 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:16.347 10:28:42 -- common/autotest_common.sh@10 -- $ set +x 00:01:16.347 10:28:42 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:16.347 10:28:42 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:16.347 10:28:42 -- pm/common@17 -- $ local monitor 00:01:16.347 10:28:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.347 10:28:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.347 10:28:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.347 10:28:42 -- pm/common@21 -- $ date +%s 00:01:16.347 10:28:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.347 10:28:42 -- pm/common@21 -- $ date +%s 00:01:16.347 10:28:42 -- pm/common@25 -- $ sleep 1 00:01:16.347 10:28:42 -- pm/common@21 -- $ date +%s 00:01:16.347 10:28:42 -- pm/common@21 -- $ date +%s 00:01:16.347 10:28:42 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730798922 00:01:16.347 10:28:42 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730798922 00:01:16.347 10:28:42 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730798922 00:01:16.347 10:28:42 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730798922 00:01:16.606 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730798922_collect-cpu-load.pm.log 00:01:16.606 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730798922_collect-vmstat.pm.log 00:01:16.606 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730798922_collect-bmc-pm.bmc.pm.log 00:01:16.606 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730798922_collect-cpu-temp.pm.log 00:01:17.546 10:28:43 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:17.546 10:28:43 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:17.546 10:28:43 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:17.546 10:28:43 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:17.546 10:28:43 -- spdk/autobuild.sh@16 -- $ date -u 00:01:17.546 Tue Nov 5 09:28:43 AM UTC 2024 00:01:17.546 10:28:43 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:17.546 v25.01-pre-159-g2f35f3599 00:01:17.546 10:28:43 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:17.546 10:28:43 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:17.546 10:28:43 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:17.546 10:28:43 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:17.546 10:28:43 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:17.546 10:28:43 -- common/autotest_common.sh@10 -- $ set +x 00:01:17.546 ************************************ 00:01:17.546 START TEST ubsan 00:01:17.546 ************************************ 00:01:17.546 10:28:43 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:01:17.546 using ubsan 00:01:17.546 00:01:17.546 real 0m0.001s 00:01:17.546 user 0m0.001s 00:01:17.546 sys 0m0.000s 00:01:17.546 10:28:43 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:17.546 10:28:43 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:17.546 ************************************ 00:01:17.546 END TEST ubsan 00:01:17.546 ************************************ 00:01:17.546 10:28:43 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:17.546 10:28:43 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:17.546 10:28:43 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:17.546 10:28:43 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:01:17.546 10:28:43 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:01:17.546 10:28:43 -- common/autobuild_common.sh@438 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:01:17.546 10:28:43 -- common/autotest_common.sh@1103 -- $ '[' 2 
-le 1 ']' 00:01:17.546 10:28:43 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:17.546 10:28:43 -- common/autotest_common.sh@10 -- $ set +x 00:01:17.546 ************************************ 00:01:17.546 START TEST autobuild_llvm_precompile 00:01:17.546 ************************************ 00:01:17.546 10:28:43 autobuild_llvm_precompile -- common/autotest_common.sh@1127 -- $ _llvm_precompile 00:01:17.546 10:28:43 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:01:17.546 10:28:43 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39) 00:01:17.546 Target: x86_64-redhat-linux-gnu 00:01:17.546 Thread model: posix 00:01:17.546 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:01:17.546 10:28:43 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=17 00:01:17.546 10:28:43 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-17 00:01:17.546 10:28:43 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-17 00:01:17.546 10:28:43 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17 00:01:17.546 10:28:43 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-17 00:01:17.546 10:28:43 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:01:17.546 10:28:43 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:17.546 10:28:43 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]] 00:01:17.546 10:28:43 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a' 00:01:17.546 10:28:43 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:17.806 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:17.806 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:18.376 Using 'verbs' RDMA provider 00:01:34.212 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:46.424 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:46.992 Creating mk/config.mk...done. 00:01:46.992 Creating mk/cc.flags.mk...done. 00:01:46.992 Type 'make' to build. 
00:01:46.992 00:01:46.992 real 0m29.241s 00:01:46.992 user 0m13.841s 00:01:46.992 sys 0m14.447s 00:01:46.992 10:29:12 autobuild_llvm_precompile -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:46.992 10:29:12 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:01:46.992 ************************************ 00:01:46.992 END TEST autobuild_llvm_precompile 00:01:46.992 ************************************ 00:01:46.992 10:29:12 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:46.992 10:29:12 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:46.992 10:29:12 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:46.992 10:29:12 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:46.992 10:29:12 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:47.250 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:47.250 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:47.510 Using 'verbs' RDMA provider 00:02:01.104 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:13.321 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:13.321 Creating mk/config.mk...done. 00:02:13.321 Creating mk/cc.flags.mk...done. 00:02:13.321 Type 'make' to build. 00:02:13.321 10:29:39 -- spdk/autobuild.sh@70 -- $ run_test make make -j72 00:02:13.321 10:29:39 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:13.321 10:29:39 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:13.321 10:29:39 -- common/autotest_common.sh@10 -- $ set +x 00:02:13.321 ************************************ 00:02:13.321 START TEST make 00:02:13.321 ************************************ 00:02:13.321 10:29:39 make -- common/autotest_common.sh@1127 -- $ make -j72 00:02:13.887 make[1]: Nothing to be done for 'all'. 
00:02:15.803 The Meson build system 00:02:15.803 Version: 1.5.0 00:02:15.803 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:02:15.803 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:15.803 Build type: native build 00:02:15.803 Project name: libvfio-user 00:02:15.803 Project version: 0.0.1 00:02:15.803 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:02:15.803 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:02:15.803 Host machine cpu family: x86_64 00:02:15.803 Host machine cpu: x86_64 00:02:15.803 Run-time dependency threads found: YES 00:02:15.803 Library dl found: YES 00:02:15.803 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:15.803 Run-time dependency json-c found: YES 0.17 00:02:15.804 Run-time dependency cmocka found: YES 1.1.7 00:02:15.804 Program pytest-3 found: NO 00:02:15.804 Program flake8 found: NO 00:02:15.804 Program misspell-fixer found: NO 00:02:15.804 Program restructuredtext-lint found: NO 00:02:15.804 Program valgrind found: YES (/usr/bin/valgrind) 00:02:15.804 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:15.804 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:15.804 Compiler for C supports arguments -Wwrite-strings: YES 00:02:15.804 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:15.804 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:15.804 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:15.804 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:15.804 Build targets in project: 8 00:02:15.804 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:15.804 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:15.804 00:02:15.804 libvfio-user 0.0.1 00:02:15.804 00:02:15.804 User defined options 00:02:15.804 buildtype : debug 00:02:15.804 default_library: static 00:02:15.804 libdir : /usr/local/lib 00:02:15.804 00:02:15.804 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:16.063 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:16.321 [1/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:02:16.321 [2/36] Compiling C object samples/lspci.p/lspci.c.o 00:02:16.321 [3/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:02:16.321 [4/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:02:16.321 [5/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:16.321 [6/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:02:16.321 [7/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:16.321 [8/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:16.321 [9/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:16.321 [10/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:16.321 [11/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:16.321 [12/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:16.321 [13/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:02:16.321 [14/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:16.321 [15/36] Compiling C object test/unit_tests.p/mocks.c.o 00:02:16.321 [16/36] Compiling C object samples/client.p/client.c.o 00:02:16.321 [17/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:02:16.321 [18/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:02:16.321 [19/36] Compiling C object samples/null.p/null.c.o 00:02:16.321 [20/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:16.321 [21/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:16.321 [22/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:16.321 [23/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:16.321 [24/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:16.321 [25/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:16.321 [26/36] Compiling C object samples/server.p/server.c.o 00:02:16.321 [27/36] Linking target samples/client 00:02:16.321 [28/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:02:16.321 [29/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:16.321 [30/36] Linking static target lib/libvfio-user.a 00:02:16.321 [31/36] Linking target samples/lspci 00:02:16.321 [32/36] Linking target samples/server 00:02:16.321 [33/36] Linking target test/unit_tests 00:02:16.321 [34/36] Linking target samples/null 00:02:16.321 [35/36] Linking target samples/shadow_ioeventfd_server 00:02:16.321 [36/36] Linking target samples/gpio-pci-idio-16 00:02:16.580 INFO: autodetecting backend as ninja 00:02:16.580 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:16.580 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:17.159 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:17.159 ninja: no work to do. 00:02:22.434 The Meson build system 00:02:22.434 Version: 1.5.0 00:02:22.434 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:02:22.434 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:02:22.434 Build type: native build 00:02:22.434 Program cat found: YES (/usr/bin/cat) 00:02:22.434 Project name: DPDK 00:02:22.434 Project version: 24.03.0 00:02:22.434 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:02:22.434 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:02:22.434 Host machine cpu family: x86_64 00:02:22.434 Host machine cpu: x86_64 00:02:22.434 Message: ## Building in Developer Mode ## 00:02:22.434 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:22.434 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:22.434 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:22.434 Program python3 found: YES (/usr/bin/python3) 00:02:22.434 Program cat found: YES (/usr/bin/cat) 00:02:22.434 Compiler for C supports arguments -march=native: YES 00:02:22.434 Checking for size of "void *" : 8 00:02:22.434 Checking for size of "void *" : 8 (cached) 00:02:22.434 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:22.434 Library m found: YES 00:02:22.434 Library numa found: YES 00:02:22.434 Has header "numaif.h" : YES 00:02:22.434 Library fdt found: NO 00:02:22.434 Library execinfo found: NO 00:02:22.434 Has header "execinfo.h" : YES 00:02:22.434 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:22.434 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:22.434 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:22.434 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:22.434 Run-time dependency openssl found: YES 3.1.1 00:02:22.434 Run-time dependency libpcap found: YES 1.10.4 00:02:22.434 Has header "pcap.h" with dependency libpcap: YES 00:02:22.434 Compiler for C supports arguments -Wcast-qual: YES 00:02:22.434 Compiler for C supports arguments -Wdeprecated: YES 00:02:22.434 Compiler for C supports arguments -Wformat: YES 00:02:22.434 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:22.434 Compiler for C supports arguments -Wformat-security: YES 00:02:22.434 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:22.434 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:22.434 Compiler for C supports arguments -Wnested-externs: YES 00:02:22.434 Compiler for C supports arguments -Wold-style-definition: YES 00:02:22.434 Compiler for C supports arguments -Wpointer-arith: YES 00:02:22.434 Compiler for C supports arguments -Wsign-compare: YES 00:02:22.434 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:22.434 Compiler for C supports arguments -Wundef: YES 00:02:22.434 Compiler for C supports arguments -Wwrite-strings: YES 00:02:22.435 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:22.435 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:02:22.435 Compiler for C supports arguments -Wno-missing-field-initializers: 
YES 00:02:22.435 Program objdump found: YES (/usr/bin/objdump) 00:02:22.435 Compiler for C supports arguments -mavx512f: YES 00:02:22.435 Checking if "AVX512 checking" compiles: YES 00:02:22.435 Fetching value of define "__SSE4_2__" : 1 00:02:22.435 Fetching value of define "__AES__" : 1 00:02:22.435 Fetching value of define "__AVX__" : 1 00:02:22.435 Fetching value of define "__AVX2__" : 1 00:02:22.435 Fetching value of define "__AVX512BW__" : 1 00:02:22.435 Fetching value of define "__AVX512CD__" : 1 00:02:22.435 Fetching value of define "__AVX512DQ__" : 1 00:02:22.435 Fetching value of define "__AVX512F__" : 1 00:02:22.435 Fetching value of define "__AVX512VL__" : 1 00:02:22.435 Fetching value of define "__PCLMUL__" : 1 00:02:22.435 Fetching value of define "__RDRND__" : 1 00:02:22.435 Fetching value of define "__RDSEED__" : 1 00:02:22.435 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:22.435 Fetching value of define "__znver1__" : (undefined) 00:02:22.435 Fetching value of define "__znver2__" : (undefined) 00:02:22.435 Fetching value of define "__znver3__" : (undefined) 00:02:22.435 Fetching value of define "__znver4__" : (undefined) 00:02:22.435 Compiler for C supports arguments -Wno-format-truncation: NO 00:02:22.435 Message: lib/log: Defining dependency "log" 00:02:22.435 Message: lib/kvargs: Defining dependency "kvargs" 00:02:22.435 Message: lib/telemetry: Defining dependency "telemetry" 00:02:22.435 Checking for function "getentropy" : NO 00:02:22.435 Message: lib/eal: Defining dependency "eal" 00:02:22.435 Message: lib/ring: Defining dependency "ring" 00:02:22.435 Message: lib/rcu: Defining dependency "rcu" 00:02:22.435 Message: lib/mempool: Defining dependency "mempool" 00:02:22.435 Message: lib/mbuf: Defining dependency "mbuf" 00:02:22.435 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:22.435 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:22.435 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:22.435 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:22.435 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:22.435 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:22.435 Compiler for C supports arguments -mpclmul: YES 00:02:22.435 Compiler for C supports arguments -maes: YES 00:02:22.435 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:22.435 Compiler for C supports arguments -mavx512bw: YES 00:02:22.435 Compiler for C supports arguments -mavx512dq: YES 00:02:22.435 Compiler for C supports arguments -mavx512vl: YES 00:02:22.435 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:22.435 Compiler for C supports arguments -mavx2: YES 00:02:22.435 Compiler for C supports arguments -mavx: YES 00:02:22.435 Message: lib/net: Defining dependency "net" 00:02:22.435 Message: lib/meter: Defining dependency "meter" 00:02:22.435 Message: lib/ethdev: Defining dependency "ethdev" 00:02:22.435 Message: lib/pci: Defining dependency "pci" 00:02:22.435 Message: lib/cmdline: Defining dependency "cmdline" 00:02:22.435 Message: lib/hash: Defining dependency "hash" 00:02:22.435 Message: lib/timer: Defining dependency "timer" 00:02:22.435 Message: lib/compressdev: Defining dependency "compressdev" 00:02:22.435 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:22.435 Message: lib/dmadev: Defining dependency "dmadev" 00:02:22.435 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:22.435 Message: lib/power: Defining dependency "power" 00:02:22.435 Message: lib/reorder: Defining 
dependency "reorder" 00:02:22.435 Message: lib/security: Defining dependency "security" 00:02:22.435 Has header "linux/userfaultfd.h" : YES 00:02:22.435 Has header "linux/vduse.h" : YES 00:02:22.435 Message: lib/vhost: Defining dependency "vhost" 00:02:22.435 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:02:22.435 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:22.435 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:22.435 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:22.435 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:22.435 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:22.435 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:22.435 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:22.435 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:22.435 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:22.435 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:22.435 Configuring doxy-api-html.conf using configuration 00:02:22.435 Configuring doxy-api-man.conf using configuration 00:02:22.435 Program mandb found: YES (/usr/bin/mandb) 00:02:22.435 Program sphinx-build found: NO 00:02:22.435 Configuring rte_build_config.h using configuration 00:02:22.435 Message: 00:02:22.435 ================= 00:02:22.435 Applications Enabled 00:02:22.435 ================= 00:02:22.435 00:02:22.435 apps: 00:02:22.435 00:02:22.435 00:02:22.435 Message: 00:02:22.435 ================= 00:02:22.435 Libraries Enabled 00:02:22.435 ================= 00:02:22.435 00:02:22.435 libs: 00:02:22.435 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:22.435 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:22.435 cryptodev, dmadev, power, reorder, security, vhost, 00:02:22.435 00:02:22.435 Message: 00:02:22.435 =============== 00:02:22.435 Drivers Enabled 00:02:22.435 =============== 00:02:22.435 00:02:22.435 common: 00:02:22.435 00:02:22.435 bus: 00:02:22.435 pci, vdev, 00:02:22.435 mempool: 00:02:22.435 ring, 00:02:22.435 dma: 00:02:22.435 00:02:22.435 net: 00:02:22.435 00:02:22.435 crypto: 00:02:22.435 00:02:22.435 compress: 00:02:22.435 00:02:22.435 vdpa: 00:02:22.435 00:02:22.435 00:02:22.435 Message: 00:02:22.435 ================= 00:02:22.435 Content Skipped 00:02:22.435 ================= 00:02:22.435 00:02:22.435 apps: 00:02:22.435 dumpcap: explicitly disabled via build config 00:02:22.435 graph: explicitly disabled via build config 00:02:22.435 pdump: explicitly disabled via build config 00:02:22.435 proc-info: explicitly disabled via build config 00:02:22.435 test-acl: explicitly disabled via build config 00:02:22.435 test-bbdev: explicitly disabled via build config 00:02:22.435 test-cmdline: explicitly disabled via build config 00:02:22.435 test-compress-perf: explicitly disabled via build config 00:02:22.435 test-crypto-perf: explicitly disabled via build config 00:02:22.435 test-dma-perf: explicitly disabled via build config 00:02:22.435 test-eventdev: explicitly disabled via build config 00:02:22.435 test-fib: explicitly disabled via build config 00:02:22.435 test-flow-perf: explicitly disabled via build config 00:02:22.435 test-gpudev: explicitly disabled via build config 00:02:22.435 test-mldev: explicitly disabled via build config 00:02:22.435 test-pipeline: explicitly disabled via build config 00:02:22.435 test-pmd: 
explicitly disabled via build config 00:02:22.435 test-regex: explicitly disabled via build config 00:02:22.435 test-sad: explicitly disabled via build config 00:02:22.435 test-security-perf: explicitly disabled via build config 00:02:22.435 00:02:22.435 libs: 00:02:22.435 argparse: explicitly disabled via build config 00:02:22.435 metrics: explicitly disabled via build config 00:02:22.435 acl: explicitly disabled via build config 00:02:22.435 bbdev: explicitly disabled via build config 00:02:22.435 bitratestats: explicitly disabled via build config 00:02:22.435 bpf: explicitly disabled via build config 00:02:22.435 cfgfile: explicitly disabled via build config 00:02:22.435 distributor: explicitly disabled via build config 00:02:22.435 efd: explicitly disabled via build config 00:02:22.435 eventdev: explicitly disabled via build config 00:02:22.435 dispatcher: explicitly disabled via build config 00:02:22.435 gpudev: explicitly disabled via build config 00:02:22.435 gro: explicitly disabled via build config 00:02:22.435 gso: explicitly disabled via build config 00:02:22.435 ip_frag: explicitly disabled via build config 00:02:22.435 jobstats: explicitly disabled via build config 00:02:22.435 latencystats: explicitly disabled via build config 00:02:22.435 lpm: explicitly disabled via build config 00:02:22.435 member: explicitly disabled via build config 00:02:22.435 pcapng: explicitly disabled via build config 00:02:22.435 rawdev: explicitly disabled via build config 00:02:22.435 regexdev: explicitly disabled via build config 00:02:22.435 mldev: explicitly disabled via build config 00:02:22.435 rib: explicitly disabled via build config 00:02:22.435 sched: explicitly disabled via build config 00:02:22.435 stack: explicitly disabled via build config 00:02:22.435 ipsec: explicitly disabled via build config 00:02:22.435 pdcp: explicitly disabled via build config 00:02:22.435 fib: explicitly disabled via build config 00:02:22.435 port: explicitly disabled via build config 00:02:22.435 pdump: explicitly disabled via build config 00:02:22.435 table: explicitly disabled via build config 00:02:22.435 pipeline: explicitly disabled via build config 00:02:22.435 graph: explicitly disabled via build config 00:02:22.435 node: explicitly disabled via build config 00:02:22.435 00:02:22.435 drivers: 00:02:22.435 common/cpt: not in enabled drivers build config 00:02:22.435 common/dpaax: not in enabled drivers build config 00:02:22.435 common/iavf: not in enabled drivers build config 00:02:22.435 common/idpf: not in enabled drivers build config 00:02:22.435 common/ionic: not in enabled drivers build config 00:02:22.435 common/mvep: not in enabled drivers build config 00:02:22.435 common/octeontx: not in enabled drivers build config 00:02:22.435 bus/auxiliary: not in enabled drivers build config 00:02:22.435 bus/cdx: not in enabled drivers build config 00:02:22.435 bus/dpaa: not in enabled drivers build config 00:02:22.435 bus/fslmc: not in enabled drivers build config 00:02:22.435 bus/ifpga: not in enabled drivers build config 00:02:22.435 bus/platform: not in enabled drivers build config 00:02:22.435 bus/uacce: not in enabled drivers build config 00:02:22.435 bus/vmbus: not in enabled drivers build config 00:02:22.435 common/cnxk: not in enabled drivers build config 00:02:22.435 common/mlx5: not in enabled drivers build config 00:02:22.435 common/nfp: not in enabled drivers build config 00:02:22.435 common/nitrox: not in enabled drivers build config 00:02:22.436 common/qat: not in enabled drivers build config 
00:02:22.436 common/sfc_efx: not in enabled drivers build config 00:02:22.436 mempool/bucket: not in enabled drivers build config 00:02:22.436 mempool/cnxk: not in enabled drivers build config 00:02:22.436 mempool/dpaa: not in enabled drivers build config 00:02:22.436 mempool/dpaa2: not in enabled drivers build config 00:02:22.436 mempool/octeontx: not in enabled drivers build config 00:02:22.436 mempool/stack: not in enabled drivers build config 00:02:22.436 dma/cnxk: not in enabled drivers build config 00:02:22.436 dma/dpaa: not in enabled drivers build config 00:02:22.436 dma/dpaa2: not in enabled drivers build config 00:02:22.436 dma/hisilicon: not in enabled drivers build config 00:02:22.436 dma/idxd: not in enabled drivers build config 00:02:22.436 dma/ioat: not in enabled drivers build config 00:02:22.436 dma/skeleton: not in enabled drivers build config 00:02:22.436 net/af_packet: not in enabled drivers build config 00:02:22.436 net/af_xdp: not in enabled drivers build config 00:02:22.436 net/ark: not in enabled drivers build config 00:02:22.436 net/atlantic: not in enabled drivers build config 00:02:22.436 net/avp: not in enabled drivers build config 00:02:22.436 net/axgbe: not in enabled drivers build config 00:02:22.436 net/bnx2x: not in enabled drivers build config 00:02:22.436 net/bnxt: not in enabled drivers build config 00:02:22.436 net/bonding: not in enabled drivers build config 00:02:22.436 net/cnxk: not in enabled drivers build config 00:02:22.436 net/cpfl: not in enabled drivers build config 00:02:22.436 net/cxgbe: not in enabled drivers build config 00:02:22.436 net/dpaa: not in enabled drivers build config 00:02:22.436 net/dpaa2: not in enabled drivers build config 00:02:22.436 net/e1000: not in enabled drivers build config 00:02:22.436 net/ena: not in enabled drivers build config 00:02:22.436 net/enetc: not in enabled drivers build config 00:02:22.436 net/enetfec: not in enabled drivers build config 00:02:22.436 net/enic: not in enabled drivers build config 00:02:22.436 net/failsafe: not in enabled drivers build config 00:02:22.436 net/fm10k: not in enabled drivers build config 00:02:22.436 net/gve: not in enabled drivers build config 00:02:22.436 net/hinic: not in enabled drivers build config 00:02:22.436 net/hns3: not in enabled drivers build config 00:02:22.436 net/i40e: not in enabled drivers build config 00:02:22.436 net/iavf: not in enabled drivers build config 00:02:22.436 net/ice: not in enabled drivers build config 00:02:22.436 net/idpf: not in enabled drivers build config 00:02:22.436 net/igc: not in enabled drivers build config 00:02:22.436 net/ionic: not in enabled drivers build config 00:02:22.436 net/ipn3ke: not in enabled drivers build config 00:02:22.436 net/ixgbe: not in enabled drivers build config 00:02:22.436 net/mana: not in enabled drivers build config 00:02:22.436 net/memif: not in enabled drivers build config 00:02:22.436 net/mlx4: not in enabled drivers build config 00:02:22.436 net/mlx5: not in enabled drivers build config 00:02:22.436 net/mvneta: not in enabled drivers build config 00:02:22.436 net/mvpp2: not in enabled drivers build config 00:02:22.436 net/netvsc: not in enabled drivers build config 00:02:22.436 net/nfb: not in enabled drivers build config 00:02:22.436 net/nfp: not in enabled drivers build config 00:02:22.436 net/ngbe: not in enabled drivers build config 00:02:22.436 net/null: not in enabled drivers build config 00:02:22.436 net/octeontx: not in enabled drivers build config 00:02:22.436 net/octeon_ep: not in enabled 
drivers build config 00:02:22.436 net/pcap: not in enabled drivers build config 00:02:22.436 net/pfe: not in enabled drivers build config 00:02:22.436 net/qede: not in enabled drivers build config 00:02:22.436 net/ring: not in enabled drivers build config 00:02:22.436 net/sfc: not in enabled drivers build config 00:02:22.436 net/softnic: not in enabled drivers build config 00:02:22.436 net/tap: not in enabled drivers build config 00:02:22.436 net/thunderx: not in enabled drivers build config 00:02:22.436 net/txgbe: not in enabled drivers build config 00:02:22.436 net/vdev_netvsc: not in enabled drivers build config 00:02:22.436 net/vhost: not in enabled drivers build config 00:02:22.436 net/virtio: not in enabled drivers build config 00:02:22.436 net/vmxnet3: not in enabled drivers build config 00:02:22.436 raw/*: missing internal dependency, "rawdev" 00:02:22.436 crypto/armv8: not in enabled drivers build config 00:02:22.436 crypto/bcmfs: not in enabled drivers build config 00:02:22.436 crypto/caam_jr: not in enabled drivers build config 00:02:22.436 crypto/ccp: not in enabled drivers build config 00:02:22.436 crypto/cnxk: not in enabled drivers build config 00:02:22.436 crypto/dpaa_sec: not in enabled drivers build config 00:02:22.436 crypto/dpaa2_sec: not in enabled drivers build config 00:02:22.436 crypto/ipsec_mb: not in enabled drivers build config 00:02:22.436 crypto/mlx5: not in enabled drivers build config 00:02:22.436 crypto/mvsam: not in enabled drivers build config 00:02:22.436 crypto/nitrox: not in enabled drivers build config 00:02:22.436 crypto/null: not in enabled drivers build config 00:02:22.436 crypto/octeontx: not in enabled drivers build config 00:02:22.436 crypto/openssl: not in enabled drivers build config 00:02:22.436 crypto/scheduler: not in enabled drivers build config 00:02:22.436 crypto/uadk: not in enabled drivers build config 00:02:22.436 crypto/virtio: not in enabled drivers build config 00:02:22.436 compress/isal: not in enabled drivers build config 00:02:22.436 compress/mlx5: not in enabled drivers build config 00:02:22.436 compress/nitrox: not in enabled drivers build config 00:02:22.436 compress/octeontx: not in enabled drivers build config 00:02:22.436 compress/zlib: not in enabled drivers build config 00:02:22.436 regex/*: missing internal dependency, "regexdev" 00:02:22.436 ml/*: missing internal dependency, "mldev" 00:02:22.436 vdpa/ifc: not in enabled drivers build config 00:02:22.436 vdpa/mlx5: not in enabled drivers build config 00:02:22.436 vdpa/nfp: not in enabled drivers build config 00:02:22.436 vdpa/sfc: not in enabled drivers build config 00:02:22.436 event/*: missing internal dependency, "eventdev" 00:02:22.436 baseband/*: missing internal dependency, "bbdev" 00:02:22.436 gpu/*: missing internal dependency, "gpudev" 00:02:22.436 00:02:22.436 00:02:22.436 Build targets in project: 85 00:02:22.436 00:02:22.436 DPDK 24.03.0 00:02:22.436 00:02:22.436 User defined options 00:02:22.436 buildtype : debug 00:02:22.436 default_library : static 00:02:22.436 libdir : lib 00:02:22.436 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:02:22.436 c_args : -fPIC -Werror 00:02:22.436 c_link_args : 00:02:22.436 cpu_instruction_set: native 00:02:22.436 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:02:22.436 disable_libs : 
port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:02:22.436 enable_docs : false 00:02:22.436 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:22.436 enable_kmods : false 00:02:22.436 max_lcores : 128 00:02:22.436 tests : false 00:02:22.436 00:02:22.436 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:23.008 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:02:23.275 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:23.275 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:23.275 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:23.275 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:23.275 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:23.275 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:23.275 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:23.275 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:23.275 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:23.275 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:23.275 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:23.275 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:23.275 [13/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:23.275 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:23.275 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:23.275 [16/268] Linking static target lib/librte_kvargs.a 00:02:23.275 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:23.275 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:23.275 [19/268] Linking static target lib/librte_log.a 00:02:23.534 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:23.800 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:23.800 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:23.800 [23/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:23.800 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:23.800 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:23.800 [26/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:23.800 [27/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:23.800 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:23.800 [29/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:23.800 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:23.800 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:23.800 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:23.800 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:23.800 [34/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:23.800 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:23.800 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:23.800 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:23.800 [38/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.800 [39/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:23.800 [40/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:23.800 [41/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:23.800 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:23.800 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:23.800 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:23.800 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:23.800 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:23.800 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:23.800 [48/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:23.800 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:23.800 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:23.800 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:23.800 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:23.800 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:23.800 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:23.800 [55/268] Linking static target lib/librte_ring.a 00:02:23.800 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:23.800 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:23.800 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:23.800 [59/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:23.800 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:23.800 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:23.800 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:23.800 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:23.800 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:23.800 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:23.800 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:23.800 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:24.060 [68/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:24.060 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:24.060 [70/268] Linking static target lib/librte_telemetry.a 00:02:24.060 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:24.060 [72/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:24.060 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:24.060 [74/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:24.060 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:24.060 [76/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:24.060 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:24.060 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:24.060 [79/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:24.060 [80/268] Linking static target lib/librte_pci.a 00:02:24.060 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:24.060 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:24.060 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:24.060 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:24.060 [85/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:24.060 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:24.060 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:24.061 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:24.061 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:24.061 [90/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:24.061 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:24.061 [92/268] Linking static target lib/librte_rcu.a 00:02:24.061 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:24.061 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:24.061 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:24.061 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:24.061 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:24.061 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:24.061 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:24.061 [100/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:24.061 [101/268] Linking static target lib/librte_mempool.a 00:02:24.061 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:24.061 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:24.061 [104/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:24.061 [105/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:24.061 [106/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:24.061 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:24.061 [108/268] Linking static target lib/librte_eal.a 00:02:24.061 [109/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:24.320 [110/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:24.320 [111/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:24.320 [112/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:24.320 [113/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:24.320 [114/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:24.320 [115/268] Linking static target 
lib/librte_mbuf.a 00:02:24.320 [116/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:24.320 [117/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:24.320 [118/268] Linking static target lib/librte_net.a 00:02:24.320 [119/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.579 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:24.579 [121/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.579 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:24.579 [123/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:24.579 [124/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.579 [125/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:24.579 [126/268] Linking static target lib/librte_meter.a 00:02:24.579 [127/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:24.579 [128/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.579 [129/268] Linking target lib/librte_log.so.24.1 00:02:24.579 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:24.579 [131/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:24.579 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:24.579 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:24.579 [134/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:24.579 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:24.579 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:24.579 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:24.579 [138/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:24.579 [139/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:24.579 [140/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:24.579 [141/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:24.839 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:24.839 [143/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:24.839 [144/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.839 [145/268] Linking static target lib/librte_timer.a 00:02:24.839 [146/268] Linking static target lib/librte_cmdline.a 00:02:24.839 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:24.839 [148/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:24.839 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:24.839 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:24.839 [151/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:24.839 [152/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:24.839 [153/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:24.839 [154/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:24.839 [155/268] 
Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:24.839 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:24.839 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:24.839 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:24.839 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:24.839 [160/268] Linking static target lib/librte_power.a 00:02:24.839 [161/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:24.839 [162/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:24.839 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:24.839 [164/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.839 [165/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:24.839 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:24.839 [167/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.839 [168/268] Linking static target lib/librte_hash.a 00:02:24.839 [169/268] Linking target lib/librte_kvargs.so.24.1 00:02:24.839 [170/268] Linking static target lib/librte_dmadev.a 00:02:24.839 [171/268] Linking target lib/librte_telemetry.so.24.1 00:02:24.839 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:24.839 [173/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:24.839 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:24.839 [175/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:24.839 [176/268] Linking static target lib/librte_compressdev.a 00:02:24.839 [177/268] Linking static target lib/librte_security.a 00:02:24.839 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:24.839 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:24.839 [180/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:24.839 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:24.839 [182/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:24.839 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:24.839 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:24.839 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:24.839 [186/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:24.839 [187/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:24.839 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:24.839 [189/268] Linking static target lib/librte_reorder.a 00:02:25.098 [190/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:25.098 [191/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:25.098 [192/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:25.098 [193/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:25.098 [194/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:25.098 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:25.098 [196/268] Compiling C object 
drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:25.098 [197/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:25.098 [198/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.098 [199/268] Linking static target drivers/librte_bus_vdev.a 00:02:25.098 [200/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:25.098 [201/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:25.098 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:25.098 [203/268] Linking static target lib/librte_cryptodev.a 00:02:25.098 [204/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.098 [205/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:25.098 [206/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:25.098 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:25.098 [208/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:25.098 [209/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:25.098 [210/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.098 [211/268] Linking static target drivers/librte_mempool_ring.a 00:02:25.357 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:25.357 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:25.357 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:25.357 [215/268] Linking static target drivers/librte_bus_pci.a 00:02:25.357 [216/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:25.357 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:25.357 [218/268] Linking static target lib/librte_ethdev.a 00:02:25.357 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.357 [220/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.615 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.615 [222/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.615 [223/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.615 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.872 [225/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.129 [226/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.129 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.129 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:26.386 [229/268] Linking static target lib/librte_vhost.a 00:02:27.318 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.252 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.803 [232/268] Generating 
lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.175 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.175 [234/268] Linking target lib/librte_eal.so.24.1 00:02:36.433 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:36.433 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:36.433 [237/268] Linking target lib/librte_ring.so.24.1 00:02:36.433 [238/268] Linking target lib/librte_meter.so.24.1 00:02:36.433 [239/268] Linking target lib/librte_timer.so.24.1 00:02:36.433 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:36.433 [241/268] Linking target lib/librte_pci.so.24.1 00:02:36.433 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:36.433 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:36.433 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:36.690 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:36.691 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:36.691 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:36.691 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:36.691 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:36.691 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:36.691 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:36.948 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:36.948 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:36.948 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:37.207 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:37.207 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:02:37.207 [257/268] Linking target lib/librte_compressdev.so.24.1 00:02:37.207 [258/268] Linking target lib/librte_net.so.24.1 00:02:37.207 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:37.207 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:37.519 [261/268] Linking target lib/librte_security.so.24.1 00:02:37.519 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:37.519 [263/268] Linking target lib/librte_hash.so.24.1 00:02:37.519 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:37.519 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:37.519 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:37.798 [267/268] Linking target lib/librte_power.so.24.1 00:02:37.798 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:37.798 INFO: autodetecting backend as ninja 00:02:37.798 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 72 00:02:38.791 CC lib/log/log.o 00:02:38.791 CC lib/log/log_flags.o 00:02:38.791 CC lib/log/log_deprecated.o 00:02:38.791 CC lib/ut/ut.o 00:02:38.791 CC lib/ut_mock/mock.o 00:02:39.048 LIB libspdk_log.a 00:02:39.048 LIB libspdk_ut_mock.a 00:02:39.048 LIB libspdk_ut.a 00:02:39.305 CXX lib/trace_parser/trace.o 00:02:39.305 CC lib/dma/dma.o 00:02:39.305 CC lib/ioat/ioat.o 00:02:39.305 CC lib/util/base64.o 00:02:39.305 CC 
lib/util/bit_array.o 00:02:39.305 CC lib/util/cpuset.o 00:02:39.305 CC lib/util/crc16.o 00:02:39.305 CC lib/util/crc32.o 00:02:39.305 CC lib/util/crc32c.o 00:02:39.305 CC lib/util/crc32_ieee.o 00:02:39.305 CC lib/util/crc64.o 00:02:39.305 CC lib/util/dif.o 00:02:39.305 CC lib/util/fd.o 00:02:39.305 CC lib/util/file.o 00:02:39.305 CC lib/util/fd_group.o 00:02:39.305 CC lib/util/hexlify.o 00:02:39.305 CC lib/util/iov.o 00:02:39.305 CC lib/util/math.o 00:02:39.305 CC lib/util/strerror_tls.o 00:02:39.305 CC lib/util/net.o 00:02:39.305 CC lib/util/pipe.o 00:02:39.305 CC lib/util/string.o 00:02:39.305 CC lib/util/uuid.o 00:02:39.305 CC lib/util/xor.o 00:02:39.305 CC lib/util/zipf.o 00:02:39.305 CC lib/util/md5.o 00:02:39.305 CC lib/vfio_user/host/vfio_user_pci.o 00:02:39.305 CC lib/vfio_user/host/vfio_user.o 00:02:39.561 LIB libspdk_dma.a 00:02:39.561 LIB libspdk_ioat.a 00:02:39.561 LIB libspdk_vfio_user.a 00:02:39.818 LIB libspdk_util.a 00:02:40.075 CC lib/env_dpdk/memory.o 00:02:40.075 CC lib/json/json_parse.o 00:02:40.075 CC lib/json/json_util.o 00:02:40.075 CC lib/env_dpdk/env.o 00:02:40.075 CC lib/json/json_write.o 00:02:40.075 CC lib/env_dpdk/pci.o 00:02:40.075 CC lib/env_dpdk/init.o 00:02:40.075 CC lib/env_dpdk/threads.o 00:02:40.075 CC lib/env_dpdk/pci_ioat.o 00:02:40.075 CC lib/env_dpdk/pci_idxd.o 00:02:40.075 CC lib/env_dpdk/pci_virtio.o 00:02:40.075 CC lib/env_dpdk/pci_vmd.o 00:02:40.075 CC lib/env_dpdk/pci_event.o 00:02:40.075 CC lib/env_dpdk/sigbus_handler.o 00:02:40.075 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:40.075 CC lib/env_dpdk/pci_dpdk.o 00:02:40.075 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:40.075 CC lib/rdma_utils/rdma_utils.o 00:02:40.075 CC lib/conf/conf.o 00:02:40.075 CC lib/vmd/vmd.o 00:02:40.075 CC lib/vmd/led.o 00:02:40.075 CC lib/rdma_provider/common.o 00:02:40.075 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:40.075 CC lib/idxd/idxd_user.o 00:02:40.075 CC lib/idxd/idxd.o 00:02:40.075 CC lib/idxd/idxd_kernel.o 00:02:40.075 LIB libspdk_trace_parser.a 00:02:40.332 LIB libspdk_rdma_provider.a 00:02:40.332 LIB libspdk_conf.a 00:02:40.332 LIB libspdk_json.a 00:02:40.332 LIB libspdk_rdma_utils.a 00:02:40.589 LIB libspdk_vmd.a 00:02:40.589 LIB libspdk_idxd.a 00:02:40.589 CC lib/jsonrpc/jsonrpc_server.o 00:02:40.589 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:40.589 CC lib/jsonrpc/jsonrpc_client.o 00:02:40.589 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:40.848 LIB libspdk_jsonrpc.a 00:02:41.107 LIB libspdk_env_dpdk.a 00:02:41.107 CC lib/rpc/rpc.o 00:02:41.365 LIB libspdk_rpc.a 00:02:41.624 CC lib/notify/notify.o 00:02:41.624 CC lib/notify/notify_rpc.o 00:02:41.624 CC lib/keyring/keyring.o 00:02:41.624 CC lib/keyring/keyring_rpc.o 00:02:41.624 CC lib/trace/trace.o 00:02:41.624 CC lib/trace/trace_rpc.o 00:02:41.624 CC lib/trace/trace_flags.o 00:02:41.624 LIB libspdk_notify.a 00:02:41.881 LIB libspdk_trace.a 00:02:41.881 LIB libspdk_keyring.a 00:02:42.141 CC lib/sock/sock.o 00:02:42.141 CC lib/sock/sock_rpc.o 00:02:42.141 CC lib/thread/thread.o 00:02:42.141 CC lib/thread/iobuf.o 00:02:42.400 LIB libspdk_sock.a 00:02:42.658 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:42.658 CC lib/nvme/nvme_ns_cmd.o 00:02:42.658 CC lib/nvme/nvme_ctrlr.o 00:02:42.658 CC lib/nvme/nvme_fabric.o 00:02:42.658 CC lib/nvme/nvme_ns.o 00:02:42.658 CC lib/nvme/nvme_pcie_common.o 00:02:42.658 CC lib/nvme/nvme_pcie.o 00:02:42.658 CC lib/nvme/nvme_qpair.o 00:02:42.658 CC lib/nvme/nvme_quirks.o 00:02:42.658 CC lib/nvme/nvme_transport.o 00:02:42.658 CC lib/nvme/nvme.o 00:02:42.658 CC lib/nvme/nvme_discovery.o 00:02:42.658 CC 
lib/nvme/nvme_tcp.o 00:02:42.658 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:42.658 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:42.658 CC lib/nvme/nvme_opal.o 00:02:42.658 CC lib/nvme/nvme_io_msg.o 00:02:42.658 CC lib/nvme/nvme_poll_group.o 00:02:42.658 CC lib/nvme/nvme_zns.o 00:02:42.658 CC lib/nvme/nvme_stubs.o 00:02:42.658 CC lib/nvme/nvme_vfio_user.o 00:02:42.658 CC lib/nvme/nvme_auth.o 00:02:42.658 CC lib/nvme/nvme_cuse.o 00:02:42.658 CC lib/nvme/nvme_rdma.o 00:02:43.225 LIB libspdk_thread.a 00:02:43.484 CC lib/blob/zeroes.o 00:02:43.484 CC lib/virtio/virtio_vhost_user.o 00:02:43.484 CC lib/blob/blobstore.o 00:02:43.484 CC lib/blob/request.o 00:02:43.484 CC lib/virtio/virtio.o 00:02:43.484 CC lib/virtio/virtio_pci.o 00:02:43.484 CC lib/blob/blob_bs_dev.o 00:02:43.484 CC lib/virtio/virtio_vfio_user.o 00:02:43.484 CC lib/fsdev/fsdev_rpc.o 00:02:43.484 CC lib/fsdev/fsdev.o 00:02:43.484 CC lib/fsdev/fsdev_io.o 00:02:43.484 CC lib/accel/accel_rpc.o 00:02:43.484 CC lib/accel/accel.o 00:02:43.484 CC lib/accel/accel_sw.o 00:02:43.484 CC lib/vfu_tgt/tgt_endpoint.o 00:02:43.484 CC lib/vfu_tgt/tgt_rpc.o 00:02:43.484 CC lib/init/subsystem_rpc.o 00:02:43.484 CC lib/init/json_config.o 00:02:43.484 CC lib/init/subsystem.o 00:02:43.484 CC lib/init/rpc.o 00:02:43.742 LIB libspdk_init.a 00:02:43.742 LIB libspdk_vfu_tgt.a 00:02:43.742 LIB libspdk_virtio.a 00:02:44.001 LIB libspdk_fsdev.a 00:02:44.001 CC lib/event/app.o 00:02:44.001 CC lib/event/reactor.o 00:02:44.001 CC lib/event/log_rpc.o 00:02:44.001 CC lib/event/app_rpc.o 00:02:44.001 CC lib/event/scheduler_static.o 00:02:44.259 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:44.259 LIB libspdk_event.a 00:02:44.518 LIB libspdk_nvme.a 00:02:44.518 LIB libspdk_accel.a 00:02:44.776 LIB libspdk_fuse_dispatcher.a 00:02:44.776 CC lib/bdev/bdev_zone.o 00:02:44.776 CC lib/bdev/bdev.o 00:02:44.776 CC lib/bdev/bdev_rpc.o 00:02:44.776 CC lib/bdev/part.o 00:02:44.776 CC lib/bdev/scsi_nvme.o 00:02:46.152 LIB libspdk_blob.a 00:02:46.152 CC lib/blobfs/blobfs.o 00:02:46.152 CC lib/blobfs/tree.o 00:02:46.152 CC lib/lvol/lvol.o 00:02:46.719 LIB libspdk_bdev.a 00:02:46.981 CC lib/nbd/nbd.o 00:02:46.981 CC lib/nbd/nbd_rpc.o 00:02:46.981 CC lib/scsi/lun.o 00:02:46.981 CC lib/scsi/dev.o 00:02:46.981 CC lib/scsi/scsi.o 00:02:46.981 CC lib/scsi/port.o 00:02:46.981 CC lib/scsi/task.o 00:02:46.981 CC lib/scsi/scsi_bdev.o 00:02:46.981 CC lib/scsi/scsi_pr.o 00:02:46.981 CC lib/scsi/scsi_rpc.o 00:02:46.981 CC lib/ublk/ublk.o 00:02:46.981 CC lib/ublk/ublk_rpc.o 00:02:46.981 CC lib/nvmf/ctrlr.o 00:02:46.981 CC lib/nvmf/ctrlr_bdev.o 00:02:46.981 CC lib/nvmf/subsystem.o 00:02:46.981 CC lib/nvmf/ctrlr_discovery.o 00:02:46.981 CC lib/nvmf/nvmf.o 00:02:46.981 CC lib/nvmf/transport.o 00:02:46.981 CC lib/nvmf/nvmf_rpc.o 00:02:46.981 CC lib/nvmf/tcp.o 00:02:46.981 CC lib/nvmf/stubs.o 00:02:46.981 CC lib/ftl/ftl_core.o 00:02:46.981 CC lib/nvmf/mdns_server.o 00:02:46.981 CC lib/nvmf/auth.o 00:02:46.981 CC lib/ftl/ftl_init.o 00:02:46.981 CC lib/nvmf/vfio_user.o 00:02:46.981 CC lib/ftl/ftl_layout.o 00:02:46.981 CC lib/ftl/ftl_debug.o 00:02:46.981 CC lib/ftl/ftl_l2p_flat.o 00:02:46.981 CC lib/nvmf/rdma.o 00:02:46.981 CC lib/ftl/ftl_io.o 00:02:46.981 CC lib/ftl/ftl_l2p.o 00:02:46.981 CC lib/ftl/ftl_sb.o 00:02:46.981 CC lib/ftl/ftl_band.o 00:02:46.981 CC lib/ftl/ftl_nv_cache.o 00:02:46.981 CC lib/ftl/ftl_band_ops.o 00:02:46.981 CC lib/ftl/ftl_rq.o 00:02:46.981 CC lib/ftl/ftl_writer.o 00:02:46.981 CC lib/ftl/ftl_reloc.o 00:02:46.981 CC lib/ftl/ftl_l2p_cache.o 00:02:46.981 CC lib/ftl/ftl_p2l.o 00:02:46.981 
CC lib/ftl/ftl_p2l_log.o 00:02:46.982 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:46.982 CC lib/ftl/mngt/ftl_mngt.o 00:02:46.982 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:46.982 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:46.982 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:46.982 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:46.982 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:46.982 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:46.982 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:46.982 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:46.982 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:46.982 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:46.982 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:46.982 CC lib/ftl/utils/ftl_conf.o 00:02:46.982 CC lib/ftl/utils/ftl_md.o 00:02:46.982 CC lib/ftl/utils/ftl_mempool.o 00:02:46.982 CC lib/ftl/utils/ftl_bitmap.o 00:02:46.982 CC lib/ftl/utils/ftl_property.o 00:02:46.982 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:46.982 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:46.982 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:46.982 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:46.982 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:46.982 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:46.982 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:46.982 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:46.982 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:46.982 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:46.982 LIB libspdk_lvol.a 00:02:46.982 LIB libspdk_blobfs.a 00:02:46.982 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:46.982 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:46.982 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:47.240 CC lib/ftl/base/ftl_base_dev.o 00:02:47.240 CC lib/ftl/base/ftl_base_bdev.o 00:02:47.240 CC lib/ftl/ftl_trace.o 00:02:47.498 LIB libspdk_nbd.a 00:02:47.498 LIB libspdk_scsi.a 00:02:47.498 LIB libspdk_ublk.a 00:02:47.756 CC lib/vhost/vhost_rpc.o 00:02:47.756 CC lib/vhost/vhost.o 00:02:47.756 CC lib/vhost/vhost_scsi.o 00:02:47.756 CC lib/vhost/vhost_blk.o 00:02:47.756 CC lib/vhost/rte_vhost_user.o 00:02:47.756 CC lib/iscsi/conn.o 00:02:47.756 CC lib/iscsi/param.o 00:02:47.756 CC lib/iscsi/init_grp.o 00:02:47.756 CC lib/iscsi/iscsi.o 00:02:47.756 CC lib/iscsi/portal_grp.o 00:02:47.756 CC lib/iscsi/tgt_node.o 00:02:47.756 CC lib/iscsi/iscsi_subsystem.o 00:02:47.756 CC lib/iscsi/iscsi_rpc.o 00:02:47.756 CC lib/iscsi/task.o 00:02:48.014 LIB libspdk_ftl.a 00:02:48.581 LIB libspdk_nvmf.a 00:02:48.581 LIB libspdk_vhost.a 00:02:48.840 LIB libspdk_iscsi.a 00:02:49.407 CC module/env_dpdk/env_dpdk_rpc.o 00:02:49.407 CC module/vfu_device/vfu_virtio.o 00:02:49.407 CC module/vfu_device/vfu_virtio_scsi.o 00:02:49.407 CC module/vfu_device/vfu_virtio_fs.o 00:02:49.407 CC module/vfu_device/vfu_virtio_blk.o 00:02:49.407 CC module/vfu_device/vfu_virtio_rpc.o 00:02:49.407 CC module/keyring/file/keyring_rpc.o 00:02:49.407 CC module/keyring/file/keyring.o 00:02:49.407 CC module/keyring/linux/keyring.o 00:02:49.407 CC module/keyring/linux/keyring_rpc.o 00:02:49.407 LIB libspdk_env_dpdk_rpc.a 00:02:49.407 CC module/sock/posix/posix.o 00:02:49.407 CC module/accel/dsa/accel_dsa.o 00:02:49.407 CC module/accel/iaa/accel_iaa_rpc.o 00:02:49.407 CC module/accel/iaa/accel_iaa.o 00:02:49.407 CC module/accel/dsa/accel_dsa_rpc.o 00:02:49.407 CC module/accel/error/accel_error.o 00:02:49.407 CC module/accel/error/accel_error_rpc.o 00:02:49.407 CC module/fsdev/aio/fsdev_aio.o 00:02:49.407 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:49.407 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:49.407 CC module/accel/ioat/accel_ioat_rpc.o 00:02:49.407 CC module/scheduler/gscheduler/gscheduler.o 00:02:49.407 CC 
module/accel/ioat/accel_ioat.o 00:02:49.407 CC module/fsdev/aio/linux_aio_mgr.o 00:02:49.407 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:49.407 CC module/blob/bdev/blob_bdev.o 00:02:49.665 LIB libspdk_keyring_file.a 00:02:49.665 LIB libspdk_keyring_linux.a 00:02:49.665 LIB libspdk_scheduler_dynamic.a 00:02:49.665 LIB libspdk_scheduler_dpdk_governor.a 00:02:49.665 LIB libspdk_accel_error.a 00:02:49.665 LIB libspdk_scheduler_gscheduler.a 00:02:49.665 LIB libspdk_accel_iaa.a 00:02:49.665 LIB libspdk_accel_ioat.a 00:02:49.665 LIB libspdk_accel_dsa.a 00:02:49.665 LIB libspdk_blob_bdev.a 00:02:49.924 LIB libspdk_vfu_device.a 00:02:50.182 LIB libspdk_sock_posix.a 00:02:50.182 LIB libspdk_fsdev_aio.a 00:02:50.182 CC module/blobfs/bdev/blobfs_bdev.o 00:02:50.182 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:50.182 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:50.182 CC module/bdev/ftl/bdev_ftl.o 00:02:50.182 CC module/bdev/nvme/bdev_nvme.o 00:02:50.182 CC module/bdev/nvme/bdev_mdns_client.o 00:02:50.182 CC module/bdev/nvme/vbdev_opal.o 00:02:50.182 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:50.182 CC module/bdev/nvme/nvme_rpc.o 00:02:50.182 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:50.182 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:50.182 CC module/bdev/delay/vbdev_delay.o 00:02:50.182 CC module/bdev/error/vbdev_error.o 00:02:50.182 CC module/bdev/error/vbdev_error_rpc.o 00:02:50.182 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:50.182 CC module/bdev/raid/bdev_raid_rpc.o 00:02:50.182 CC module/bdev/raid/bdev_raid.o 00:02:50.182 CC module/bdev/raid/raid0.o 00:02:50.182 CC module/bdev/raid/concat.o 00:02:50.182 CC module/bdev/raid/bdev_raid_sb.o 00:02:50.182 CC module/bdev/raid/raid1.o 00:02:50.182 CC module/bdev/lvol/vbdev_lvol.o 00:02:50.182 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:50.182 CC module/bdev/null/bdev_null.o 00:02:50.182 CC module/bdev/null/bdev_null_rpc.o 00:02:50.182 CC module/bdev/gpt/gpt.o 00:02:50.182 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:50.182 CC module/bdev/malloc/bdev_malloc.o 00:02:50.182 CC module/bdev/split/vbdev_split_rpc.o 00:02:50.182 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:50.182 CC module/bdev/split/vbdev_split.o 00:02:50.182 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:50.182 CC module/bdev/gpt/vbdev_gpt.o 00:02:50.182 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:50.183 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:50.183 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:50.183 CC module/bdev/iscsi/bdev_iscsi.o 00:02:50.183 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:50.183 CC module/bdev/aio/bdev_aio.o 00:02:50.183 CC module/bdev/aio/bdev_aio_rpc.o 00:02:50.183 CC module/bdev/passthru/vbdev_passthru.o 00:02:50.183 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:50.441 LIB libspdk_blobfs_bdev.a 00:02:50.441 LIB libspdk_bdev_error.a 00:02:50.441 LIB libspdk_bdev_ftl.a 00:02:50.441 LIB libspdk_bdev_null.a 00:02:50.441 LIB libspdk_bdev_zone_block.a 00:02:50.441 LIB libspdk_bdev_split.a 00:02:50.441 LIB libspdk_bdev_aio.a 00:02:50.441 LIB libspdk_bdev_iscsi.a 00:02:50.441 LIB libspdk_bdev_delay.a 00:02:50.441 LIB libspdk_bdev_passthru.a 00:02:50.441 LIB libspdk_bdev_gpt.a 00:02:50.699 LIB libspdk_bdev_malloc.a 00:02:50.699 LIB libspdk_bdev_lvol.a 00:02:50.699 LIB libspdk_bdev_virtio.a 00:02:50.958 LIB libspdk_bdev_raid.a 00:02:52.337 LIB libspdk_bdev_nvme.a 00:02:52.904 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:52.904 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:52.904 CC module/event/subsystems/vmd/vmd.o 
00:02:52.904 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:52.904 CC module/event/subsystems/sock/sock.o 00:02:52.904 CC module/event/subsystems/fsdev/fsdev.o 00:02:52.904 CC module/event/subsystems/scheduler/scheduler.o 00:02:52.904 CC module/event/subsystems/keyring/keyring.o 00:02:52.904 CC module/event/subsystems/iobuf/iobuf.o 00:02:52.904 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:52.904 LIB libspdk_event_scheduler.a 00:02:52.904 LIB libspdk_event_vfu_tgt.a 00:02:52.905 LIB libspdk_event_vhost_blk.a 00:02:52.905 LIB libspdk_event_keyring.a 00:02:52.905 LIB libspdk_event_vmd.a 00:02:52.905 LIB libspdk_event_fsdev.a 00:02:52.905 LIB libspdk_event_sock.a 00:02:52.905 LIB libspdk_event_iobuf.a 00:02:53.163 CC module/event/subsystems/accel/accel.o 00:02:53.422 LIB libspdk_event_accel.a 00:02:53.680 CC module/event/subsystems/bdev/bdev.o 00:02:53.937 LIB libspdk_event_bdev.a 00:02:54.195 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:54.195 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:54.195 CC module/event/subsystems/ublk/ublk.o 00:02:54.195 CC module/event/subsystems/scsi/scsi.o 00:02:54.195 CC module/event/subsystems/nbd/nbd.o 00:02:54.195 LIB libspdk_event_nbd.a 00:02:54.453 LIB libspdk_event_ublk.a 00:02:54.453 LIB libspdk_event_scsi.a 00:02:54.453 LIB libspdk_event_nvmf.a 00:02:54.711 CC module/event/subsystems/iscsi/iscsi.o 00:02:54.711 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:54.711 LIB libspdk_event_vhost_scsi.a 00:02:54.711 LIB libspdk_event_iscsi.a 00:02:55.286 CC app/spdk_nvme_identify/identify.o 00:02:55.286 CXX app/trace/trace.o 00:02:55.286 CC app/spdk_nvme_perf/perf.o 00:02:55.286 CC app/trace_record/trace_record.o 00:02:55.286 CC app/spdk_nvme_discover/discovery_aer.o 00:02:55.286 CC app/spdk_top/spdk_top.o 00:02:55.286 CC app/spdk_lspci/spdk_lspci.o 00:02:55.286 CC test/rpc_client/rpc_client_test.o 00:02:55.286 CC app/spdk_dd/spdk_dd.o 00:02:55.286 TEST_HEADER include/spdk/accel_module.h 00:02:55.286 TEST_HEADER include/spdk/accel.h 00:02:55.286 TEST_HEADER include/spdk/assert.h 00:02:55.287 TEST_HEADER include/spdk/barrier.h 00:02:55.287 TEST_HEADER include/spdk/base64.h 00:02:55.287 TEST_HEADER include/spdk/bdev_module.h 00:02:55.287 TEST_HEADER include/spdk/bdev.h 00:02:55.287 TEST_HEADER include/spdk/bit_array.h 00:02:55.287 TEST_HEADER include/spdk/bdev_zone.h 00:02:55.287 TEST_HEADER include/spdk/bit_pool.h 00:02:55.287 TEST_HEADER include/spdk/blob_bdev.h 00:02:55.287 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:55.287 TEST_HEADER include/spdk/blobfs.h 00:02:55.287 TEST_HEADER include/spdk/conf.h 00:02:55.287 TEST_HEADER include/spdk/config.h 00:02:55.287 TEST_HEADER include/spdk/blob.h 00:02:55.287 TEST_HEADER include/spdk/cpuset.h 00:02:55.287 TEST_HEADER include/spdk/crc16.h 00:02:55.287 TEST_HEADER include/spdk/crc32.h 00:02:55.287 TEST_HEADER include/spdk/crc64.h 00:02:55.287 TEST_HEADER include/spdk/dif.h 00:02:55.287 TEST_HEADER include/spdk/dma.h 00:02:55.287 TEST_HEADER include/spdk/endian.h 00:02:55.287 TEST_HEADER include/spdk/env_dpdk.h 00:02:55.287 TEST_HEADER include/spdk/env.h 00:02:55.287 TEST_HEADER include/spdk/event.h 00:02:55.287 TEST_HEADER include/spdk/fd_group.h 00:02:55.287 TEST_HEADER include/spdk/fd.h 00:02:55.287 TEST_HEADER include/spdk/file.h 00:02:55.287 TEST_HEADER include/spdk/fsdev_module.h 00:02:55.287 TEST_HEADER include/spdk/fsdev.h 00:02:55.287 TEST_HEADER include/spdk/ftl.h 00:02:55.287 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:55.287 TEST_HEADER include/spdk/gpt_spec.h 00:02:55.287 
TEST_HEADER include/spdk/hexlify.h 00:02:55.287 TEST_HEADER include/spdk/histogram_data.h 00:02:55.287 TEST_HEADER include/spdk/idxd.h 00:02:55.287 TEST_HEADER include/spdk/idxd_spec.h 00:02:55.287 TEST_HEADER include/spdk/init.h 00:02:55.287 TEST_HEADER include/spdk/ioat_spec.h 00:02:55.287 TEST_HEADER include/spdk/ioat.h 00:02:55.287 TEST_HEADER include/spdk/iscsi_spec.h 00:02:55.287 TEST_HEADER include/spdk/jsonrpc.h 00:02:55.287 TEST_HEADER include/spdk/json.h 00:02:55.287 TEST_HEADER include/spdk/keyring.h 00:02:55.287 TEST_HEADER include/spdk/keyring_module.h 00:02:55.287 TEST_HEADER include/spdk/likely.h 00:02:55.287 TEST_HEADER include/spdk/lvol.h 00:02:55.287 TEST_HEADER include/spdk/log.h 00:02:55.287 TEST_HEADER include/spdk/md5.h 00:02:55.287 TEST_HEADER include/spdk/mmio.h 00:02:55.287 TEST_HEADER include/spdk/memory.h 00:02:55.287 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:55.287 TEST_HEADER include/spdk/nbd.h 00:02:55.287 TEST_HEADER include/spdk/net.h 00:02:55.287 TEST_HEADER include/spdk/notify.h 00:02:55.287 TEST_HEADER include/spdk/nvme.h 00:02:55.287 TEST_HEADER include/spdk/nvme_intel.h 00:02:55.287 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:55.287 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:55.287 TEST_HEADER include/spdk/nvme_zns.h 00:02:55.287 TEST_HEADER include/spdk/nvme_spec.h 00:02:55.287 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:55.287 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:55.287 TEST_HEADER include/spdk/nvmf.h 00:02:55.287 TEST_HEADER include/spdk/nvmf_spec.h 00:02:55.287 TEST_HEADER include/spdk/nvmf_transport.h 00:02:55.287 TEST_HEADER include/spdk/opal_spec.h 00:02:55.287 TEST_HEADER include/spdk/opal.h 00:02:55.287 TEST_HEADER include/spdk/pci_ids.h 00:02:55.287 TEST_HEADER include/spdk/pipe.h 00:02:55.287 TEST_HEADER include/spdk/queue.h 00:02:55.287 TEST_HEADER include/spdk/reduce.h 00:02:55.287 TEST_HEADER include/spdk/scheduler.h 00:02:55.287 TEST_HEADER include/spdk/rpc.h 00:02:55.287 TEST_HEADER include/spdk/scsi.h 00:02:55.287 TEST_HEADER include/spdk/scsi_spec.h 00:02:55.287 TEST_HEADER include/spdk/sock.h 00:02:55.287 TEST_HEADER include/spdk/stdinc.h 00:02:55.287 TEST_HEADER include/spdk/string.h 00:02:55.287 TEST_HEADER include/spdk/thread.h 00:02:55.287 TEST_HEADER include/spdk/trace_parser.h 00:02:55.287 TEST_HEADER include/spdk/trace.h 00:02:55.287 TEST_HEADER include/spdk/tree.h 00:02:55.287 TEST_HEADER include/spdk/ublk.h 00:02:55.287 TEST_HEADER include/spdk/util.h 00:02:55.287 TEST_HEADER include/spdk/uuid.h 00:02:55.287 CC app/fio/nvme/fio_plugin.o 00:02:55.287 TEST_HEADER include/spdk/version.h 00:02:55.287 CC app/spdk_tgt/spdk_tgt.o 00:02:55.287 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:55.287 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:55.287 TEST_HEADER include/spdk/vhost.h 00:02:55.287 TEST_HEADER include/spdk/vmd.h 00:02:55.287 TEST_HEADER include/spdk/xor.h 00:02:55.287 CXX test/cpp_headers/accel.o 00:02:55.287 TEST_HEADER include/spdk/zipf.h 00:02:55.287 CC test/app/histogram_perf/histogram_perf.o 00:02:55.287 CC test/thread/poller_perf/poller_perf.o 00:02:55.287 CC test/app/stub/stub.o 00:02:55.287 CXX test/cpp_headers/accel_module.o 00:02:55.287 CXX test/cpp_headers/assert.o 00:02:55.287 CXX test/cpp_headers/base64.o 00:02:55.287 CXX test/cpp_headers/barrier.o 00:02:55.287 CXX test/cpp_headers/bdev.o 00:02:55.287 CXX test/cpp_headers/bdev_module.o 00:02:55.287 CXX test/cpp_headers/bdev_zone.o 00:02:55.287 CXX test/cpp_headers/bit_pool.o 00:02:55.287 CXX test/cpp_headers/bit_array.o 00:02:55.287 CC 
test/thread/lock/spdk_lock.o 00:02:55.287 CXX test/cpp_headers/blob_bdev.o 00:02:55.287 CXX test/cpp_headers/blobfs_bdev.o 00:02:55.287 CXX test/cpp_headers/blobfs.o 00:02:55.287 CXX test/cpp_headers/blob.o 00:02:55.287 CXX test/cpp_headers/conf.o 00:02:55.287 CXX test/cpp_headers/config.o 00:02:55.287 CXX test/cpp_headers/cpuset.o 00:02:55.287 CXX test/cpp_headers/crc16.o 00:02:55.287 CXX test/cpp_headers/crc64.o 00:02:55.287 CXX test/cpp_headers/crc32.o 00:02:55.287 CXX test/cpp_headers/dif.o 00:02:55.287 CXX test/cpp_headers/endian.o 00:02:55.287 CXX test/cpp_headers/dma.o 00:02:55.287 CXX test/cpp_headers/env_dpdk.o 00:02:55.287 CXX test/cpp_headers/event.o 00:02:55.287 CXX test/cpp_headers/env.o 00:02:55.287 CC test/app/jsoncat/jsoncat.o 00:02:55.287 CXX test/cpp_headers/fd_group.o 00:02:55.287 CXX test/cpp_headers/fd.o 00:02:55.287 CC app/iscsi_tgt/iscsi_tgt.o 00:02:55.287 CXX test/cpp_headers/file.o 00:02:55.287 CXX test/cpp_headers/fsdev.o 00:02:55.287 CXX test/cpp_headers/fsdev_module.o 00:02:55.287 CXX test/cpp_headers/ftl.o 00:02:55.287 CC test/env/memory/memory_ut.o 00:02:55.287 CC app/nvmf_tgt/nvmf_main.o 00:02:55.287 CC examples/ioat/perf/perf.o 00:02:55.287 CC examples/util/zipf/zipf.o 00:02:55.287 CXX test/cpp_headers/fuse_dispatcher.o 00:02:55.287 CXX test/cpp_headers/gpt_spec.o 00:02:55.287 CXX test/cpp_headers/hexlify.o 00:02:55.287 CXX test/cpp_headers/histogram_data.o 00:02:55.287 CXX test/cpp_headers/idxd.o 00:02:55.287 CC test/env/vtophys/vtophys.o 00:02:55.287 CC examples/ioat/verify/verify.o 00:02:55.287 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:55.287 CC test/env/pci/pci_ut.o 00:02:55.287 LINK spdk_lspci 00:02:55.287 CXX test/cpp_headers/idxd_spec.o 00:02:55.287 CC app/fio/bdev/fio_plugin.o 00:02:55.287 CC test/app/bdev_svc/bdev_svc.o 00:02:55.287 CC test/dma/test_dma/test_dma.o 00:02:55.287 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:55.287 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:55.287 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:55.287 LINK spdk_nvme_discover 00:02:55.287 LINK spdk_trace_record 00:02:55.287 CC test/env/mem_callbacks/mem_callbacks.o 00:02:55.287 LINK rpc_client_test 00:02:55.547 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:55.547 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:55.547 CXX test/cpp_headers/init.o 00:02:55.547 CXX test/cpp_headers/ioat.o 00:02:55.547 LINK poller_perf 00:02:55.547 CXX test/cpp_headers/ioat_spec.o 00:02:55.547 CXX test/cpp_headers/iscsi_spec.o 00:02:55.547 LINK histogram_perf 00:02:55.547 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:55.547 CXX test/cpp_headers/json.o 00:02:55.547 LINK stub 00:02:55.547 CXX test/cpp_headers/jsonrpc.o 00:02:55.547 LINK jsoncat 00:02:55.547 CXX test/cpp_headers/keyring.o 00:02:55.547 CXX test/cpp_headers/keyring_module.o 00:02:55.547 LINK zipf 00:02:55.547 CXX test/cpp_headers/likely.o 00:02:55.547 CXX test/cpp_headers/log.o 00:02:55.547 LINK vtophys 00:02:55.547 CXX test/cpp_headers/lvol.o 00:02:55.547 CXX test/cpp_headers/md5.o 00:02:55.547 CXX test/cpp_headers/memory.o 00:02:55.547 CXX test/cpp_headers/mmio.o 00:02:55.547 CXX test/cpp_headers/nbd.o 00:02:55.547 CXX test/cpp_headers/net.o 00:02:55.547 CXX test/cpp_headers/notify.o 00:02:55.547 CXX test/cpp_headers/nvme.o 00:02:55.547 CXX test/cpp_headers/nvme_intel.o 00:02:55.547 CXX test/cpp_headers/nvme_ocssd.o 00:02:55.547 LINK interrupt_tgt 00:02:55.547 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:55.547 CXX test/cpp_headers/nvme_spec.o 00:02:55.547 CXX test/cpp_headers/nvme_zns.o 
00:02:55.547 CXX test/cpp_headers/nvmf_cmd.o 00:02:55.547 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:55.547 CXX test/cpp_headers/nvmf.o 00:02:55.547 CXX test/cpp_headers/nvmf_spec.o 00:02:55.547 CXX test/cpp_headers/nvmf_transport.o 00:02:55.547 LINK env_dpdk_post_init 00:02:55.547 CXX test/cpp_headers/opal.o 00:02:55.547 CXX test/cpp_headers/opal_spec.o 00:02:55.547 CXX test/cpp_headers/pci_ids.o 00:02:55.547 CXX test/cpp_headers/pipe.o 00:02:55.547 LINK bdev_svc 00:02:55.547 CXX test/cpp_headers/queue.o 00:02:55.547 CXX test/cpp_headers/reduce.o 00:02:55.547 CXX test/cpp_headers/rpc.o 00:02:55.547 LINK verify 00:02:55.547 CXX test/cpp_headers/scheduler.o 00:02:55.547 CXX test/cpp_headers/scsi.o 00:02:55.547 LINK iscsi_tgt 00:02:55.547 LINK ioat_perf 00:02:55.547 LINK nvmf_tgt 00:02:55.547 CXX test/cpp_headers/scsi_spec.o 00:02:55.547 CXX test/cpp_headers/sock.o 00:02:55.547 LINK spdk_tgt 00:02:55.547 CXX test/cpp_headers/stdinc.o 00:02:55.806 CXX test/cpp_headers/string.o 00:02:55.806 CXX test/cpp_headers/thread.o 00:02:55.806 CXX test/cpp_headers/trace.o 00:02:55.806 LINK spdk_trace 00:02:55.806 LINK spdk_dd 00:02:55.806 CXX test/cpp_headers/trace_parser.o 00:02:55.806 CXX test/cpp_headers/tree.o 00:02:55.806 CXX test/cpp_headers/ublk.o 00:02:55.806 CXX test/cpp_headers/util.o 00:02:55.806 CXX test/cpp_headers/uuid.o 00:02:55.806 CXX test/cpp_headers/version.o 00:02:55.806 CXX test/cpp_headers/vfio_user_pci.o 00:02:55.806 CXX test/cpp_headers/vfio_user_spec.o 00:02:55.806 CXX test/cpp_headers/vhost.o 00:02:55.806 CXX test/cpp_headers/vmd.o 00:02:55.806 CXX test/cpp_headers/xor.o 00:02:55.806 CXX test/cpp_headers/zipf.o 00:02:55.806 LINK pci_ut 00:02:56.064 LINK llvm_vfio_fuzz 00:02:56.064 LINK nvme_fuzz 00:02:56.064 LINK vhost_fuzz 00:02:56.064 LINK spdk_nvme 00:02:56.064 LINK spdk_nvme_identify 00:02:56.064 LINK spdk_bdev 00:02:56.064 LINK spdk_nvme_perf 00:02:56.064 LINK test_dma 00:02:56.064 LINK mem_callbacks 00:02:56.322 LINK spdk_top 00:02:56.322 CC app/vhost/vhost.o 00:02:56.322 LINK llvm_nvme_fuzz 00:02:56.322 CC examples/idxd/perf/perf.o 00:02:56.322 CC examples/vmd/led/led.o 00:02:56.322 CC examples/vmd/lsvmd/lsvmd.o 00:02:56.322 CC examples/sock/hello_world/hello_sock.o 00:02:56.322 CC examples/thread/thread/thread_ex.o 00:02:56.581 LINK lsvmd 00:02:56.581 LINK vhost 00:02:56.581 LINK led 00:02:56.581 LINK hello_sock 00:02:56.581 LINK memory_ut 00:02:56.581 LINK idxd_perf 00:02:56.581 LINK thread 00:02:56.839 LINK spdk_lock 00:02:57.097 LINK iscsi_fuzz 00:02:57.354 CC examples/nvme/hotplug/hotplug.o 00:02:57.354 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:57.354 CC examples/nvme/reconnect/reconnect.o 00:02:57.354 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:57.354 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:57.354 CC examples/nvme/hello_world/hello_world.o 00:02:57.354 CC examples/nvme/arbitration/arbitration.o 00:02:57.354 CC examples/nvme/abort/abort.o 00:02:57.612 CC test/event/reactor_perf/reactor_perf.o 00:02:57.612 CC test/event/event_perf/event_perf.o 00:02:57.612 CC test/event/reactor/reactor.o 00:02:57.612 LINK cmb_copy 00:02:57.612 CC test/event/app_repeat/app_repeat.o 00:02:57.612 LINK pmr_persistence 00:02:57.612 CC test/event/scheduler/scheduler.o 00:02:57.612 LINK hello_world 00:02:57.612 LINK hotplug 00:02:57.612 LINK reactor 00:02:57.612 LINK reactor_perf 00:02:57.612 LINK event_perf 00:02:57.612 LINK reconnect 00:02:57.612 LINK app_repeat 00:02:57.612 LINK arbitration 00:02:57.612 LINK abort 00:02:57.908 LINK nvme_manage 00:02:57.908 LINK 
scheduler 00:02:58.166 CC test/nvme/sgl/sgl.o 00:02:58.166 CC test/nvme/err_injection/err_injection.o 00:02:58.166 CC test/nvme/fused_ordering/fused_ordering.o 00:02:58.166 CC test/nvme/compliance/nvme_compliance.o 00:02:58.166 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:58.166 CC test/nvme/reset/reset.o 00:02:58.166 CC test/nvme/aer/aer.o 00:02:58.166 CC test/nvme/startup/startup.o 00:02:58.166 CC test/nvme/reserve/reserve.o 00:02:58.166 CC test/nvme/overhead/overhead.o 00:02:58.166 CC test/nvme/simple_copy/simple_copy.o 00:02:58.166 CC test/nvme/cuse/cuse.o 00:02:58.166 CC test/nvme/connect_stress/connect_stress.o 00:02:58.166 CC test/nvme/e2edp/nvme_dp.o 00:02:58.166 CC test/nvme/fdp/fdp.o 00:02:58.166 CC test/nvme/boot_partition/boot_partition.o 00:02:58.166 CC test/accel/dif/dif.o 00:02:58.166 CC test/blobfs/mkfs/mkfs.o 00:02:58.166 CC test/lvol/esnap/esnap.o 00:02:58.166 LINK err_injection 00:02:58.166 LINK startup 00:02:58.166 LINK simple_copy 00:02:58.166 LINK boot_partition 00:02:58.166 LINK doorbell_aers 00:02:58.166 LINK connect_stress 00:02:58.166 LINK fused_ordering 00:02:58.166 LINK sgl 00:02:58.166 LINK reserve 00:02:58.166 LINK overhead 00:02:58.166 LINK fdp 00:02:58.424 LINK reset 00:02:58.424 LINK aer 00:02:58.424 LINK nvme_dp 00:02:58.424 LINK mkfs 00:02:58.424 LINK nvme_compliance 00:02:58.683 CC examples/accel/perf/accel_perf.o 00:02:58.683 LINK dif 00:02:58.683 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:58.683 CC examples/blob/hello_world/hello_blob.o 00:02:58.683 CC examples/blob/cli/blobcli.o 00:02:58.942 LINK hello_blob 00:02:58.942 LINK hello_fsdev 00:02:59.200 LINK blobcli 00:02:59.200 LINK accel_perf 00:02:59.200 LINK cuse 00:03:00.138 CC examples/bdev/bdevperf/bdevperf.o 00:03:00.138 CC examples/bdev/hello_world/hello_bdev.o 00:03:00.398 LINK hello_bdev 00:03:00.657 LINK bdevperf 00:03:00.657 CC test/bdev/bdevio/bdevio.o 00:03:00.916 LINK bdevio 00:03:02.295 CC examples/nvmf/nvmf/nvmf.o 00:03:02.295 LINK nvmf 00:03:03.231 LINK esnap 00:03:03.799 00:03:03.799 real 0m50.220s 00:03:03.799 user 8m12.688s 00:03:03.799 sys 2m34.974s 00:03:03.799 10:30:29 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:03.799 10:30:29 make -- common/autotest_common.sh@10 -- $ set +x 00:03:03.799 ************************************ 00:03:03.799 END TEST make 00:03:03.799 ************************************ 00:03:03.799 10:30:29 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:03.799 10:30:29 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:03.799 10:30:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:03.799 10:30:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:03.799 10:30:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:03.799 10:30:29 -- pm/common@44 -- $ pid=2733881 00:03:03.799 10:30:29 -- pm/common@50 -- $ kill -TERM 2733881 00:03:03.799 10:30:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:03.799 10:30:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:03.799 10:30:29 -- pm/common@44 -- $ pid=2733883 00:03:03.799 10:30:29 -- pm/common@50 -- $ kill -TERM 2733883 00:03:03.799 10:30:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:03.799 10:30:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:03.799 10:30:29 -- pm/common@44 -- 
$ pid=2733885 00:03:03.799 10:30:29 -- pm/common@50 -- $ kill -TERM 2733885 00:03:03.799 10:30:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:03.799 10:30:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:03.799 10:30:29 -- pm/common@44 -- $ pid=2733901 00:03:03.799 10:30:29 -- pm/common@50 -- $ sudo -E kill -TERM 2733901 00:03:03.799 10:30:29 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:03.799 10:30:29 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:03:03.799 10:30:29 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:03.799 10:30:29 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:03.799 10:30:29 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:03.799 10:30:29 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:03.799 10:30:29 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:03.799 10:30:29 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:03.799 10:30:29 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:03.800 10:30:29 -- scripts/common.sh@336 -- # IFS=.-: 00:03:03.800 10:30:29 -- scripts/common.sh@336 -- # read -ra ver1 00:03:03.800 10:30:29 -- scripts/common.sh@337 -- # IFS=.-: 00:03:03.800 10:30:29 -- scripts/common.sh@337 -- # read -ra ver2 00:03:03.800 10:30:29 -- scripts/common.sh@338 -- # local 'op=<' 00:03:03.800 10:30:29 -- scripts/common.sh@340 -- # ver1_l=2 00:03:03.800 10:30:29 -- scripts/common.sh@341 -- # ver2_l=1 00:03:03.800 10:30:29 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:03.800 10:30:29 -- scripts/common.sh@344 -- # case "$op" in 00:03:03.800 10:30:29 -- scripts/common.sh@345 -- # : 1 00:03:03.800 10:30:29 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:03.800 10:30:29 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:03.800 10:30:29 -- scripts/common.sh@365 -- # decimal 1 00:03:03.800 10:30:29 -- scripts/common.sh@353 -- # local d=1 00:03:03.800 10:30:29 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:03.800 10:30:29 -- scripts/common.sh@355 -- # echo 1 00:03:03.800 10:30:29 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:03.800 10:30:29 -- scripts/common.sh@366 -- # decimal 2 00:03:03.800 10:30:29 -- scripts/common.sh@353 -- # local d=2 00:03:03.800 10:30:29 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:03.800 10:30:29 -- scripts/common.sh@355 -- # echo 2 00:03:03.800 10:30:29 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:03.800 10:30:29 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:03.800 10:30:29 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:03.800 10:30:29 -- scripts/common.sh@368 -- # return 0 00:03:03.800 10:30:29 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:03.800 10:30:29 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:03.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:03.800 --rc genhtml_branch_coverage=1 00:03:03.800 --rc genhtml_function_coverage=1 00:03:03.800 --rc genhtml_legend=1 00:03:03.800 --rc geninfo_all_blocks=1 00:03:03.800 --rc geninfo_unexecuted_blocks=1 00:03:03.800 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:03.800 ' 00:03:03.800 10:30:29 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:03.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:03.800 --rc genhtml_branch_coverage=1 00:03:03.800 --rc genhtml_function_coverage=1 00:03:03.800 --rc genhtml_legend=1 00:03:03.800 --rc geninfo_all_blocks=1 00:03:03.800 --rc geninfo_unexecuted_blocks=1 00:03:03.800 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:03.800 ' 00:03:03.800 10:30:29 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:03.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:03.800 --rc genhtml_branch_coverage=1 00:03:03.800 --rc genhtml_function_coverage=1 00:03:03.800 --rc genhtml_legend=1 00:03:03.800 --rc geninfo_all_blocks=1 00:03:03.800 --rc geninfo_unexecuted_blocks=1 00:03:03.800 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:03.800 ' 00:03:03.800 10:30:29 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:03.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:03.800 --rc genhtml_branch_coverage=1 00:03:03.800 --rc genhtml_function_coverage=1 00:03:03.800 --rc genhtml_legend=1 00:03:03.800 --rc geninfo_all_blocks=1 00:03:03.800 --rc geninfo_unexecuted_blocks=1 00:03:03.800 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:03.800 ' 00:03:03.800 10:30:29 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:03:03.800 10:30:29 -- nvmf/common.sh@7 -- # uname -s 00:03:03.800 10:30:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:03.800 10:30:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:03.800 10:30:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:03.800 10:30:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:03.800 10:30:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:03.800 10:30:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:03.800 10:30:29 -- nvmf/common.sh@14 -- 
# NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:03.800 10:30:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:03.800 10:30:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:03.800 10:30:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:03.800 10:30:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:03:03.800 10:30:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:03:03.800 10:30:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:03.800 10:30:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:03.800 10:30:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:03.800 10:30:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:03.800 10:30:29 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:03:03.800 10:30:29 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:03.800 10:30:29 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:03.800 10:30:29 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:03.800 10:30:29 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:03.800 10:30:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:03.800 10:30:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:03.800 10:30:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:03.800 10:30:29 -- paths/export.sh@5 -- # export PATH 00:03:03.800 10:30:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:03.800 10:30:29 -- nvmf/common.sh@51 -- # : 0 00:03:04.060 10:30:29 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:04.060 10:30:29 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:04.060 10:30:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:04.060 10:30:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:04.060 10:30:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:04.060 10:30:29 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:04.060 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:04.060 10:30:29 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:04.060 10:30:29 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:04.060 10:30:29 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:04.060 10:30:29 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:04.060 10:30:29 -- spdk/autotest.sh@32 -- # uname -s 00:03:04.060 
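The "[: : integer expression expected" warning from nvmf/common.sh line 33 above comes from evaluating [ '' -eq 1 ] with an empty variable: the [ builtin requires both operands of -eq to be integers, prints the warning, and the test simply evaluates false. A minimal sketch of the failure and a defensive rewrite; SOME_FLAG is a hypothetical stand-in for whichever config variable is empty here:

#!/usr/bin/env bash
# SOME_FLAG stands in for the unset/empty variable behind the warning above.
SOME_FLAG=""

# Failing pattern: an empty string is not an integer, so [ prints
# "[: : integer expression expected" on stderr and the branch is not taken.
if [ "$SOME_FLAG" -eq 1 ]; then
    echo "flag enabled"
fi

# Defensive variant: default empty/unset values to 0 before the numeric test.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
else
    echo "flag disabled or unset"
fi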
10:30:29 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:04.060 10:30:29 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:04.060 10:30:29 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:04.060 10:30:29 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:04.060 10:30:29 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:04.060 10:30:29 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:04.060 10:30:29 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:04.060 10:30:29 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:04.060 10:30:29 -- spdk/autotest.sh@48 -- # udevadm_pid=2794044 00:03:04.060 10:30:29 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:04.060 10:30:29 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:04.060 10:30:29 -- pm/common@17 -- # local monitor 00:03:04.060 10:30:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.060 10:30:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.060 10:30:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.060 10:30:29 -- pm/common@21 -- # date +%s 00:03:04.060 10:30:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.060 10:30:29 -- pm/common@21 -- # date +%s 00:03:04.060 10:30:29 -- pm/common@25 -- # sleep 1 00:03:04.060 10:30:29 -- pm/common@21 -- # date +%s 00:03:04.060 10:30:29 -- pm/common@21 -- # date +%s 00:03:04.060 10:30:29 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730799029 00:03:04.060 10:30:29 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730799029 00:03:04.060 10:30:29 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730799029 00:03:04.060 10:30:29 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730799029 00:03:04.060 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730799029_collect-cpu-load.pm.log 00:03:04.060 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730799029_collect-vmstat.pm.log 00:03:04.060 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730799029_collect-cpu-temp.pm.log 00:03:04.060 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730799029_collect-bmc-pm.bmc.pm.log 00:03:04.999 10:30:30 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:04.999 10:30:30 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:04.999 10:30:30 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:04.999 10:30:30 -- common/autotest_common.sh@10 -- # set +x 
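The core-dump plumbing traced above records the existing systemd-coredump handler and then points the kernel at core-collector.sh; the trace only shows the echo, but the usual mechanism is writing a pipe handler into /proc/sys/kernel/core_pattern (see core(5)). A rough sketch of that pattern under that assumption, with illustrative paths rather than the exact autotest ones, and requiring root:

# Remember the current handler so it can be restored after the run.
old_core_pattern=$(< /proc/sys/kernel/core_pattern)
mkdir -p /tmp/coredumps

# A leading '|' makes the kernel pipe each core dump to the named program;
# %P = PID, %s = signal number, %t = dump time (core(5) format specifiers).
echo '|/usr/local/bin/my-core-collector.sh %P %s %t' > /proc/sys/kernel/core_pattern

# ... run the workload that might crash ...

# Put the original handler back.
echo "$old_core_pattern" > /proc/sys/kernel/core_pattern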
00:03:04.999 10:30:30 -- spdk/autotest.sh@59 -- # create_test_list 00:03:04.999 10:30:30 -- common/autotest_common.sh@750 -- # xtrace_disable 00:03:04.999 10:30:30 -- common/autotest_common.sh@10 -- # set +x 00:03:04.999 10:30:30 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:03:04.999 10:30:30 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:04.999 10:30:30 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:04.999 10:30:30 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:03:04.999 10:30:30 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:04.999 10:30:30 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:04.999 10:30:30 -- common/autotest_common.sh@1455 -- # uname 00:03:04.999 10:30:30 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:04.999 10:30:30 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:04.999 10:30:30 -- common/autotest_common.sh@1475 -- # uname 00:03:04.999 10:30:30 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:04.999 10:30:30 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:04.999 10:30:30 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:03:04.999 lcov: LCOV version 1.15 00:03:04.999 10:30:31 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:03:11.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcno 00:03:16.849 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:22.126 10:30:48 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:22.126 10:30:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:22.126 10:30:48 -- common/autotest_common.sh@10 -- # set +x 00:03:22.126 10:30:48 -- spdk/autotest.sh@78 -- # rm -f 00:03:22.126 10:30:48 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:25.512 0000:1a:00.0 (8086 0a54): Already using the nvme driver 00:03:25.512 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:25.512 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:25.512 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:25.798 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:25.798 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:25.798 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:25.798 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:25.798 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:25.798 
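The lcov calls above are the front half of a standard coverage flow: a capture with -c -i records a zeroed baseline before any test runs, so files that never execute still appear in the final report. The post-run capture and merge are not visible in this part of the log; the sketch below shows the conventional shape of the whole flow, with paths shortened and the custom --gcov-tool wrapper omitted for brevity:

# Baseline before the tests (all counters zero); mirrors the "-i -t Baseline" call above.
lcov -q -c -i --no-external -d ./spdk -t Baseline -o cov_base.info

# ... run the test suites so the .gcda counters get written ...

# Post-run capture, then merge with the baseline so untouched files keep showing up.
lcov -q -c --no-external -d ./spdk -t Tests -o cov_test.info
lcov -a cov_base.info -a cov_test.info -o cov_total.info

# Optional HTML report.
genhtml cov_total.info -o coverage_html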
0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:25.798 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:25.798 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:25.798 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:25.798 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:26.057 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:26.057 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:26.057 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:28.591 10:30:54 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:28.591 10:30:54 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:28.591 10:30:54 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:28.591 10:30:54 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:28.591 10:30:54 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:28.591 10:30:54 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:28.591 10:30:54 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:28.591 10:30:54 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:28.591 10:30:54 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:28.591 10:30:54 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:28.591 10:30:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:28.591 10:30:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:28.591 10:30:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:28.591 10:30:54 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:28.591 10:30:54 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:28.591 No valid GPT data, bailing 00:03:28.591 10:30:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:28.591 10:30:54 -- scripts/common.sh@394 -- # pt= 00:03:28.591 10:30:54 -- scripts/common.sh@395 -- # return 1 00:03:28.591 10:30:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:28.591 1+0 records in 00:03:28.591 1+0 records out 00:03:28.591 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00562626 s, 186 MB/s 00:03:28.591 10:30:54 -- spdk/autotest.sh@105 -- # sync 00:03:28.591 10:30:54 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:28.591 10:30:54 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:28.591 10:30:54 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:35.158 10:31:00 -- spdk/autotest.sh@111 -- # uname -s 00:03:35.158 10:31:00 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:35.158 10:31:00 -- spdk/autotest.sh@111 -- # [[ 1 -eq 1 ]] 00:03:35.158 10:31:00 -- spdk/autotest.sh@112 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:35.158 10:31:00 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:35.158 10:31:00 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:35.158 10:31:00 -- common/autotest_common.sh@10 -- # set +x 00:03:35.158 ************************************ 00:03:35.158 START TEST setup.sh 00:03:35.158 ************************************ 00:03:35.159 10:31:00 setup.sh -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:35.159 * Looking for test storage... 
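The pre-cleanup sequence above checks each NVMe namespace twice before wiping it: the sysfs zoned attribute (zoned namespaces are skipped) and an existing partition table via blkid's PTTYPE, and only then zeroes the first MiB with dd. A condensed, destructive sketch of that logic; the device glob and the decision to wipe are illustrative, and the real script also consults spdk-gpt.py before touching anything:

for dev in /dev/nvme*n1; do
    name=$(basename "$dev")

    # "none" (or a missing attribute) means a regular, non-zoned device.
    zoned=$(cat "/sys/block/$name/queue/zoned" 2>/dev/null || echo none)
    [[ $zoned != none ]] && continue

    # blkid prints the partition-table type (gpt, dos, ...); empty output means none detected.
    pt=$(blkid -s PTTYPE -o value "$dev")
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1   # zero the first MiB, as in the log
        sync
    fi
done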
00:03:35.159 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:35.159 10:31:00 setup.sh -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:35.159 10:31:00 setup.sh -- common/autotest_common.sh@1691 -- # lcov --version 00:03:35.159 10:31:00 setup.sh -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:35.159 10:31:00 setup.sh -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@336 -- # IFS=.-: 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@336 -- # read -ra ver1 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@337 -- # IFS=.-: 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@337 -- # read -ra ver2 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@338 -- # local 'op=<' 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@340 -- # ver1_l=2 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@341 -- # ver2_l=1 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@344 -- # case "$op" in 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@345 -- # : 1 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@365 -- # decimal 1 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@353 -- # local d=1 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@355 -- # echo 1 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@365 -- # ver1[v]=1 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@366 -- # decimal 2 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@353 -- # local d=2 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@355 -- # echo 2 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@366 -- # ver2[v]=2 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:35.159 10:31:00 setup.sh -- scripts/common.sh@368 -- # return 0 00:03:35.159 10:31:00 setup.sh -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:35.159 10:31:00 setup.sh -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:35.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.159 --rc genhtml_branch_coverage=1 00:03:35.159 --rc genhtml_function_coverage=1 00:03:35.159 --rc genhtml_legend=1 00:03:35.159 --rc geninfo_all_blocks=1 00:03:35.159 --rc geninfo_unexecuted_blocks=1 00:03:35.159 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:35.159 ' 00:03:35.159 10:31:00 setup.sh -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:35.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.159 --rc genhtml_branch_coverage=1 00:03:35.159 --rc genhtml_function_coverage=1 00:03:35.159 --rc genhtml_legend=1 00:03:35.159 --rc geninfo_all_blocks=1 00:03:35.159 --rc geninfo_unexecuted_blocks=1 
00:03:35.159 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:35.159 ' 00:03:35.159 10:31:00 setup.sh -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:35.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.159 --rc genhtml_branch_coverage=1 00:03:35.159 --rc genhtml_function_coverage=1 00:03:35.159 --rc genhtml_legend=1 00:03:35.159 --rc geninfo_all_blocks=1 00:03:35.159 --rc geninfo_unexecuted_blocks=1 00:03:35.159 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:35.159 ' 00:03:35.159 10:31:00 setup.sh -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:35.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.159 --rc genhtml_branch_coverage=1 00:03:35.159 --rc genhtml_function_coverage=1 00:03:35.159 --rc genhtml_legend=1 00:03:35.159 --rc geninfo_all_blocks=1 00:03:35.159 --rc geninfo_unexecuted_blocks=1 00:03:35.159 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:35.159 ' 00:03:35.159 10:31:00 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:35.159 10:31:00 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:35.159 10:31:00 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:35.159 10:31:00 setup.sh -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:35.159 10:31:00 setup.sh -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:35.159 10:31:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:35.159 ************************************ 00:03:35.159 START TEST acl 00:03:35.159 ************************************ 00:03:35.159 10:31:00 setup.sh.acl -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:35.159 * Looking for test storage... 
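The "lt 1.15 2" trace that keeps reappearing (here for setup.sh, earlier for autotest.sh) is a component-wise numeric version compare: split both strings on dots, dashes and colons, then walk the components until one differs. A stripped-down sketch of that idea, assuming purely numeric components:

version_lt() {
    local IFS=.-:                        # split on dots, dashes and colons, as in the trace
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}  # missing components count as 0
        (( x > y )) && return 1
        (( x < y )) && return 0          # first differing component decides
    done
    return 1                             # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"     # the same answer lcov 1.15 gets above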
00:03:35.159 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:35.159 10:31:00 setup.sh.acl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:35.159 10:31:00 setup.sh.acl -- common/autotest_common.sh@1691 -- # lcov --version 00:03:35.159 10:31:00 setup.sh.acl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:35.159 10:31:00 setup.sh.acl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@336 -- # IFS=.-: 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@336 -- # read -ra ver1 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@337 -- # IFS=.-: 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@337 -- # read -ra ver2 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@338 -- # local 'op=<' 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@340 -- # ver1_l=2 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@341 -- # ver2_l=1 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@344 -- # case "$op" in 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@345 -- # : 1 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@365 -- # decimal 1 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@353 -- # local d=1 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@355 -- # echo 1 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@365 -- # ver1[v]=1 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@366 -- # decimal 2 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@353 -- # local d=2 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@355 -- # echo 2 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@366 -- # ver2[v]=2 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:35.159 10:31:00 setup.sh.acl -- scripts/common.sh@368 -- # return 0 00:03:35.159 10:31:00 setup.sh.acl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:35.159 10:31:00 setup.sh.acl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:35.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.159 --rc genhtml_branch_coverage=1 00:03:35.159 --rc genhtml_function_coverage=1 00:03:35.159 --rc genhtml_legend=1 00:03:35.159 --rc geninfo_all_blocks=1 00:03:35.159 --rc geninfo_unexecuted_blocks=1 00:03:35.159 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:35.159 ' 00:03:35.159 10:31:00 setup.sh.acl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:35.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.159 --rc genhtml_branch_coverage=1 00:03:35.159 --rc 
genhtml_function_coverage=1 00:03:35.159 --rc genhtml_legend=1 00:03:35.159 --rc geninfo_all_blocks=1 00:03:35.159 --rc geninfo_unexecuted_blocks=1 00:03:35.159 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:35.159 ' 00:03:35.159 10:31:00 setup.sh.acl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:35.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.159 --rc genhtml_branch_coverage=1 00:03:35.159 --rc genhtml_function_coverage=1 00:03:35.159 --rc genhtml_legend=1 00:03:35.159 --rc geninfo_all_blocks=1 00:03:35.159 --rc geninfo_unexecuted_blocks=1 00:03:35.159 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:35.159 ' 00:03:35.159 10:31:00 setup.sh.acl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:35.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.159 --rc genhtml_branch_coverage=1 00:03:35.159 --rc genhtml_function_coverage=1 00:03:35.159 --rc genhtml_legend=1 00:03:35.159 --rc geninfo_all_blocks=1 00:03:35.159 --rc geninfo_unexecuted_blocks=1 00:03:35.159 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:35.159 ' 00:03:35.159 10:31:00 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:35.159 10:31:00 setup.sh.acl -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:35.159 10:31:00 setup.sh.acl -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:35.159 10:31:00 setup.sh.acl -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:35.159 10:31:00 setup.sh.acl -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:35.159 10:31:00 setup.sh.acl -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:35.159 10:31:00 setup.sh.acl -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:35.159 10:31:00 setup.sh.acl -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:35.159 10:31:00 setup.sh.acl -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:35.160 10:31:00 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:35.160 10:31:00 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:35.160 10:31:00 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:35.160 10:31:00 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:35.160 10:31:00 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:35.160 10:31:00 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:35.160 10:31:00 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:41.727 10:31:07 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:41.728 10:31:07 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:41.728 10:31:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.728 10:31:07 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:41.728 10:31:07 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.728 10:31:07 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:44.262 Hugepages 00:03:44.262 node hugesize free / total 00:03:44.262 10:31:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:44.262 10:31:10 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:44.262 10:31:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.262 10:31:10 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:44.262 10:31:10 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:44.262 10:31:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.262 10:31:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:44.262 10:31:10 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:44.262 10:31:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.262 00:03:44.263 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:1a:00.0 == *:*:*.* ]] 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 
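The long read loop above is field extraction over the "setup.sh status" table: column 2 is the PCI BDF, column 6 the bound driver; rows bound to nvme are collected and everything else (hugepage summary lines, ioatdma channels) is skipped. A self-contained sketch of the same parse, fed from a here-doc with two rows lifted from this log instead of the live status output:

devs=()
declare -A drivers
while read -r _ dev _ _ _ driver _; do
    [[ $dev == *:*:*.* ]] || continue    # keep only rows whose 2nd field is a PCI BDF
    [[ $driver == nvme ]] || continue    # skip ioatdma and friends
    devs+=("$dev")
    drivers["$dev"]=$driver
done <<'EOF'
NVMe  0000:1a:00.0  8086  0a54  0  nvme     nvme0  nvme0n1
I/OAT 0000:00:04.0  8086  2021  0  ioatdma  -      -
EOF
printf 'collected %d nvme device(s): %s\n' "${#devs[@]}" "${devs[*]}"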
00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\1\a\:\0\0\.\0* ]] 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:44.263 10:31:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:44.522 10:31:10 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:44.522 10:31:10 setup.sh.acl -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:44.522 10:31:10 setup.sh.acl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:44.522 10:31:10 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:44.522 ************************************ 00:03:44.522 START TEST denied 00:03:44.522 ************************************ 00:03:44.522 10:31:10 setup.sh.acl.denied -- 
common/autotest_common.sh@1127 -- # denied 00:03:44.522 10:31:10 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:1a:00.0' 00:03:44.522 10:31:10 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:1a:00.0' 00:03:44.522 10:31:10 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:44.522 10:31:10 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.522 10:31:10 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:51.091 0000:1a:00.0 (8086 0a54): Skipping denied controller at 0000:1a:00.0 00:03:51.091 10:31:16 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:1a:00.0 00:03:51.091 10:31:16 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:51.091 10:31:16 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:51.091 10:31:16 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:1a:00.0 ]] 00:03:51.091 10:31:16 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:1a:00.0/driver 00:03:51.091 10:31:16 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:51.091 10:31:16 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:51.091 10:31:16 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:51.091 10:31:16 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:51.091 10:31:16 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:57.665 00:03:57.665 real 0m12.871s 00:03:57.665 user 0m3.636s 00:03:57.665 sys 0m8.117s 00:03:57.665 10:31:23 setup.sh.acl.denied -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:57.665 10:31:23 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:57.665 ************************************ 00:03:57.665 END TEST denied 00:03:57.665 ************************************ 00:03:57.665 10:31:23 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:57.665 10:31:23 setup.sh.acl -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:57.665 10:31:23 setup.sh.acl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:57.665 10:31:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:57.665 ************************************ 00:03:57.665 START TEST allowed 00:03:57.665 ************************************ 00:03:57.665 10:31:23 setup.sh.acl.allowed -- common/autotest_common.sh@1127 -- # allowed 00:03:57.665 10:31:23 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:1a:00.0 00:03:57.665 10:31:23 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:57.665 10:31:23 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:1a:00.0 .*: nvme -> .*' 00:03:57.665 10:31:23 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.665 10:31:23 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:07.647 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:04:07.647 10:31:32 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:07.647 10:31:32 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:07.647 10:31:32 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:07.647 10:31:32 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.647 10:31:32 setup.sh.acl.allowed 
-- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:14.219 00:04:14.219 real 0m15.839s 00:04:14.219 user 0m4.229s 00:04:14.219 sys 0m8.388s 00:04:14.219 10:31:39 setup.sh.acl.allowed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:14.219 10:31:39 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:14.219 ************************************ 00:04:14.219 END TEST allowed 00:04:14.219 ************************************ 00:04:14.219 00:04:14.219 real 0m38.878s 00:04:14.219 user 0m11.146s 00:04:14.219 sys 0m23.363s 00:04:14.219 10:31:39 setup.sh.acl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:14.219 10:31:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:14.219 ************************************ 00:04:14.219 END TEST acl 00:04:14.219 ************************************ 00:04:14.219 10:31:39 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:04:14.219 10:31:39 setup.sh -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:14.219 10:31:39 setup.sh -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:14.219 10:31:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:14.219 ************************************ 00:04:14.219 START TEST hugepages 00:04:14.219 ************************************ 00:04:14.219 10:31:39 setup.sh.hugepages -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:04:14.219 * Looking for test storage... 00:04:14.219 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:14.219 10:31:39 setup.sh.hugepages -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:14.219 10:31:39 setup.sh.hugepages -- common/autotest_common.sh@1691 -- # lcov --version 00:04:14.219 10:31:39 setup.sh.hugepages -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:14.219 10:31:39 setup.sh.hugepages -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:14.219 10:31:39 setup.sh.hugepages -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.219 10:31:39 setup.sh.hugepages -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.219 10:31:39 setup.sh.hugepages -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.219 10:31:39 setup.sh.hugepages -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.219 10:31:39 setup.sh.hugepages -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.219 10:31:39 setup.sh.hugepages -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.219 10:31:39 setup.sh.hugepages -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.219 10:31:39 setup.sh.hugepages -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.219 10:31:39 setup.sh.hugepages -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.219 10:31:39 setup.sh.hugepages -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.219 10:31:39 setup.sh.hugepages -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.219 10:31:39 setup.sh.hugepages -- scripts/common.sh@344 -- # case "$op" in 00:04:14.219 10:31:39 setup.sh.hugepages -- scripts/common.sh@345 -- # : 1 00:04:14.219 10:31:39 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.219 10:31:39 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:14.219 10:31:39 setup.sh.hugepages -- scripts/common.sh@365 -- # decimal 1 00:04:14.219 10:31:39 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=1 00:04:14.220 10:31:39 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.220 10:31:39 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 1 00:04:14.220 10:31:39 setup.sh.hugepages -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.220 10:31:39 setup.sh.hugepages -- scripts/common.sh@366 -- # decimal 2 00:04:14.220 10:31:39 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=2 00:04:14.220 10:31:39 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.220 10:31:39 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 2 00:04:14.220 10:31:39 setup.sh.hugepages -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.220 10:31:39 setup.sh.hugepages -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.220 10:31:39 setup.sh.hugepages -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.220 10:31:39 setup.sh.hugepages -- scripts/common.sh@368 -- # return 0 00:04:14.220 10:31:39 setup.sh.hugepages -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.220 10:31:39 setup.sh.hugepages -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:14.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.220 --rc genhtml_branch_coverage=1 00:04:14.220 --rc genhtml_function_coverage=1 00:04:14.220 --rc genhtml_legend=1 00:04:14.220 --rc geninfo_all_blocks=1 00:04:14.220 --rc geninfo_unexecuted_blocks=1 00:04:14.220 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:14.220 ' 00:04:14.220 10:31:39 setup.sh.hugepages -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:14.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.220 --rc genhtml_branch_coverage=1 00:04:14.220 --rc genhtml_function_coverage=1 00:04:14.220 --rc genhtml_legend=1 00:04:14.220 --rc geninfo_all_blocks=1 00:04:14.220 --rc geninfo_unexecuted_blocks=1 00:04:14.220 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:14.220 ' 00:04:14.220 10:31:39 setup.sh.hugepages -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:14.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.220 --rc genhtml_branch_coverage=1 00:04:14.220 --rc genhtml_function_coverage=1 00:04:14.220 --rc genhtml_legend=1 00:04:14.220 --rc geninfo_all_blocks=1 00:04:14.220 --rc geninfo_unexecuted_blocks=1 00:04:14.220 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:14.220 ' 00:04:14.220 10:31:39 setup.sh.hugepages -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:14.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.220 --rc genhtml_branch_coverage=1 00:04:14.220 --rc genhtml_function_coverage=1 00:04:14.220 --rc genhtml_legend=1 00:04:14.220 --rc geninfo_all_blocks=1 00:04:14.220 --rc geninfo_unexecuted_blocks=1 00:04:14.220 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:14.220 ' 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:14.220 10:31:39 
setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 64274660 kB' 'MemAvailable: 70290964 kB' 'Buffers: 30740 kB' 'Cached: 20094680 kB' 'SwapCached: 0 kB' 'Active: 14940744 kB' 'Inactive: 5750580 kB' 'Active(anon): 14425272 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569244 kB' 'Mapped: 215468 kB' 'Shmem: 13859368 kB' 'KReclaimable: 586172 kB' 'Slab: 1229348 kB' 'SReclaimable: 586172 kB' 'SUnreclaim: 643176 kB' 'KernelStack: 17632 kB' 'PageTables: 9076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52434172 kB' 'Committed_AS: 15736480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215040 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 10:31:39 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.221 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.221 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.221 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.221 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.221 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.221 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.221 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.221 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.221 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.221 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.221 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.221 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 10:31:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r 
var val _ 00:04:14.221 [get_meminfo scan condensed: the remaining /proc/meminfo fields -- KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp -- are each tested against Hugepagesize in setup/common.sh@32 and skipped with "continue"]
00:04:14.222 10:31:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:14.222 10:31:39 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:14.222 10:31:39 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
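For readers following the trace, the scan just completed is setup/common.sh's generic /proc/meminfo reader: each line is split on IFS=': ', keys other than the requested field fall through to "continue", and the matching value (2048, i.e. the default 2 MB hugepage size) is echoed before return 0. A minimal stand-alone sketch of that pattern, with a hypothetical helper name rather than the script's verbatim source:

    # Rough re-implementation of the meminfo-scanning pattern traced above (names are ours).
    get_meminfo_field() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] || continue    # skip every other /proc/meminfo key
            echo "$val"                          # value only; the "kB" unit lands in $_
            return 0
        done < /proc/meminfo
        return 1                                 # requested key not present
    }
    # get_meminfo_field Hugepagesize  ->  2048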
00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:04:14.222 10:31:39 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup 00:04:14.222 10:31:39 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:14.222 10:31:39 setup.sh.hugepages -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:14.222 10:31:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:14.222 ************************************ 00:04:14.222 START TEST single_node_setup 00:04:14.222 ************************************ 00:04:14.222 10:31:39 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1127 -- # single_node_setup 00:04:14.222 10:31:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0 00:04:14.222 10:31:39 setup.sh.hugepages.single_node_setup 
-- setup/hugepages.sh@48 -- # local size=2097152 00:04:14.222 10:31:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:04:14.222 10:31:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift 00:04:14.222 10:31:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0') 00:04:14.222 10:31:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids 00:04:14.222 10:31:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:14.222 10:31:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:14.222 10:31:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:04:14.222 10:31:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:04:14.222 10:31:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes 00:04:14.222 10:31:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:14.222 10:31:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:14.222 10:31:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:14.222 10:31:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:14.222 10:31:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:04:14.222 10:31:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:04:14.222 10:31:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:04:14.222 10:31:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0 00:04:14.222 10:31:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024 00:04:14.222 10:31:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0 00:04:14.222 10:31:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output 00:04:14.222 10:31:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.222 10:31:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:17.510 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:17.510 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:17.510 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:17.510 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:17.510 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:17.510 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:17.510 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:17.510 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:17.510 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:17.510 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:17.510 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:17.510 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:17.510 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:17.510 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:17.510 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:17.510 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:20.802 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # 
verify_nr_hugepages 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66436372 kB' 'MemAvailable: 72452516 kB' 'Buffers: 30740 kB' 'Cached: 20094856 kB' 'SwapCached: 0 kB' 'Active: 14943308 kB' 'Inactive: 5750580 kB' 'Active(anon): 14427836 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571728 kB' 'Mapped: 214900 kB' 'Shmem: 13859544 kB' 'KReclaimable: 586012 kB' 'Slab: 1228792 kB' 'SReclaimable: 586012 kB' 'SUnreclaim: 642780 kB' 'KernelStack: 17552 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15742968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214992 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue
[get_meminfo scan condensed: every subsequent /proc/meminfo field, MemFree through HardwareCorrupted in dump order, is tested against AnonHugePages in setup/common.sh@32 and skipped with "continue"]
00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0
00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
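At this point verify_nr_hugepages is re-reading /proc/meminfo to confirm what the earlier clear_hp and get_test_nr_hugepages steps set up: every node's hugepage pools were first zeroed, then NRHUGE=1024 pages were requested on HUGENODE=0 via scripts/setup.sh. A rough sketch of that clear-then-allocate sequence using the standard sysfs layout (the loop is our illustration, not the hugepages.sh source, and needs root):

    # Zero every pre-existing hugepage pool on every NUMA node (what clear_hp does).
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"
    done
    # Then reserve 1024 x 2048 kB pages on node 0 only (single_node_setup, NRHUGE=1024 HUGENODE=0).
    echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

The 1024 figure follows from the requested size: 2097152 kB at the 2048 kB default page size is 2097152 / 2048 = 1024 pages, which matches the HugePages_Total: 1024 and Hugetlb: 2097152 kB values in the meminfo dumps shown in this trace.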
setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.713 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66436876 kB' 'MemAvailable: 72453020 kB' 'Buffers: 30740 kB' 'Cached: 20094856 kB' 'SwapCached: 0 kB' 'Active: 14943372 kB' 'Inactive: 5750580 kB' 'Active(anon): 14427900 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571752 kB' 'Mapped: 214852 kB' 'Shmem: 13859544 kB' 'KReclaimable: 586012 kB' 'Slab: 1228792 kB' 'SReclaimable: 586012 kB' 'SUnreclaim: 642780 kB' 'KernelStack: 17600 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15742988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214992 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _
[get_meminfo scan condensed: every subsequent /proc/meminfo field, Buffers through HugePages_Rsvd in dump order, is tested against HugePages_Surp in setup/common.sh@32 and skipped with "continue"]
00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0
00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local
get=HugePages_Rsvd 00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.714 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66435996 kB' 'MemAvailable: 72452140 kB' 'Buffers: 30740 kB' 'Cached: 20094876 kB' 'SwapCached: 0 kB' 'Active: 14943364 kB' 'Inactive: 5750580 kB' 'Active(anon): 14427892 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571716 kB' 'Mapped: 214852 kB' 'Shmem: 13859564 kB' 'KReclaimable: 586012 kB' 'Slab: 1228792 kB' 'SReclaimable: 586012 kB' 'SUnreclaim: 642780 kB' 'KernelStack: 17584 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15743008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214992 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 
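Note: the printf '%s\n' 'MemTotal: ...' entry at setup/common.sh@16 above is the xtrace of the /proc/meminfo snapshot that get_meminfo walks through. The hugepage fields it reports are self-consistent; a quick check using only the numbers shown in that snapshot (illustrative, not part of the test script):

  pages=1024 page_kb=2048              # HugePages_Total and Hugepagesize from the snapshot above
  echo $((pages * page_kb))            # 2097152 -> matches the 'Hugetlb: 2097152 kB' line
  echo $((pages * page_kb / 1024))     # 2048 MiB (2 GiB) currently reserved as hugepages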
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.715 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:22.716 nr_hugepages=1024 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:22.716 resv_hugepages=0 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:22.716 surplus_hugepages=0 00:04:22.716 10:31:48 
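Note: the long runs of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]' / 'continue' pairs above are the xtrace of get_meminfo's field-matching loop in setup/common.sh: each snapshot line is split on ': ', non-matching keys are skipped, and the value of the requested key is echoed (0 for HugePages_Surp at hugepages.sh@98 and for HugePages_Rsvd at @99). A condensed reconstruction of that helper, following the statements visible in the trace (a sketch, not the verbatim SPDK source):

  shopt -s extglob                                   # needed for the +([0-9]) pattern below
  get_meminfo() {
      local get=$1 node=$2 var val mem_f mem
      mem_f=/proc/meminfo
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo   # per-NUMA-node view (common.sh@23-24)
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")               # strip the "Node N " prefix on per-node lines
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue           # the repeated 'continue' lines in the xtrace
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")            # the big printf seen at common.sh@16
      return 1
  }

  surp=$(get_meminfo HugePages_Surp)                 # -> 0, as recorded at hugepages.sh@98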
setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:22.716 anon_hugepages=0 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66436356 kB' 'MemAvailable: 72452500 kB' 'Buffers: 30740 kB' 'Cached: 20094896 kB' 'SwapCached: 0 kB' 'Active: 14943080 kB' 'Inactive: 5750580 kB' 'Active(anon): 14427608 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571436 kB' 'Mapped: 214852 kB' 'Shmem: 13859584 kB' 'KReclaimable: 586012 kB' 'Slab: 1228696 kB' 'SReclaimable: 586012 kB' 'SUnreclaim: 642684 kB' 'KernelStack: 17600 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15743032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214992 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 
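Note: the HugePages_Total lookup being traced here feeds the accounting check at setup/hugepages.sh@106-@109 (the (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )) tests already visible above): the kernel-reported total must equal the requested page count plus surplus and reserved pages. With the values echoed earlier (nr_hugepages=1024, surp=0, resv=0) the check reduces to the following (illustrative, reusing the get_meminfo sketch above):

  nr_hugepages=1024 surp=0 resv=0
  total=$(get_meminfo HugePages_Total)               # 1024 in this run
  (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2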
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.716 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- 
setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 35803224 kB' 'MemUsed: 12261640 kB' 'SwapCached: 0 kB' 'Active: 6912108 kB' 'Inactive: 1198740 kB' 'Active(anon): 6679848 kB' 'Inactive(anon): 0 kB' 'Active(file): 232260 kB' 'Inactive(file): 1198740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7695956 kB' 'Mapped: 96196 kB' 'AnonPages: 418112 kB' 'Shmem: 6264956 kB' 'KernelStack: 10424 kB' 'PageTables: 5456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271084 kB' 'Slab: 611800 kB' 'SReclaimable: 271084 kB' 'SUnreclaim: 340716 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _
[xtrace condensed: each remaining field of the node meminfo, from MemUsed through HugePages_Total, is compared against HugePages_Surp and skipped with 'continue']
00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read
-r var val _ 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:22.717 node0=1024 expecting 1024 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:22.717 00:04:22.717 real 0m9.195s 00:04:22.717 user 0m1.807s 00:04:22.717 sys 0m4.065s 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:22.717 10:31:48 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x 00:04:22.717 ************************************ 00:04:22.717 END TEST single_node_setup 00:04:22.717 ************************************ 00:04:22.976 10:31:48 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc 00:04:22.976 10:31:48 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:22.976 10:31:48 setup.sh.hugepages -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:22.976 10:31:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:22.976 ************************************ 00:04:22.976 START TEST even_2G_alloc 00:04:22.976 ************************************ 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1127 -- # even_2G_alloc 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local 
_nr_hugepages=1024 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.976 10:31:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:27.168 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:27.168 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:27.168 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:27.168 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:27.168 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:27.168 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:27.168 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:27.168 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:27.168 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:27.168 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:27.168 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:27.168 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:27.168 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:27.168 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:27.168 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:27.168 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:27.168 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:29.078 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages 00:04:29.078 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node 00:04:29.078 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:29.078 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:29.078 10:31:54 
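In the trace above, get_test_nr_hugepages_per_node takes the 1024 pages computed from the 2097152 kB request and spreads them evenly over the host's 2 NUMA nodes, 512 per node. The following is a minimal bash sketch of that split, assuming the variable names shown in the xtrace (nr_hugepages, no_nodes, nodes_test); it is a reconstruction for readability, not the verbatim SPDK setup/hugepages.sh source.

  #!/usr/bin/env bash
  # Sketch of the even per-node split seen in the trace (1024 pages over 2 nodes).
  # Names mirror the xtrace; the body is a reconstruction, not the SPDK original.
  nr_hugepages=1024
  no_nodes=2
  declare -a nodes_test

  _no_nodes=$no_nodes
  while (( _no_nodes > 0 )); do
      # fill from the highest-numbered node down: node1 gets 512, then node0 gets 512
      nodes_test[_no_nodes - 1]=$(( nr_hugepages / no_nodes ))
      (( _no_nodes-- ))
  done

  echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"    # node0=512 node1=512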
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:29.078 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:29.078 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:29.078 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:29.078 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:29.078 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:29.078 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:29.078 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.078 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.078 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.078 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.078 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.078 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.078 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.078 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.078 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.078 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66481996 kB' 'MemAvailable: 72498076 kB' 'Buffers: 30740 kB' 'Cached: 20095052 kB' 'SwapCached: 0 kB' 'Active: 14943984 kB' 'Inactive: 5750580 kB' 'Active(anon): 14428512 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572124 kB' 'Mapped: 214060 kB' 'Shmem: 13859740 kB' 'KReclaimable: 585948 kB' 'Slab: 1229536 kB' 'SReclaimable: 585948 kB' 'SUnreclaim: 643588 kB' 'KernelStack: 17616 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15733748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215152 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:04:29.078 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.078 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.078 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.078 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.078 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.078 
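The get_meminfo lookups traced in this section all follow the same pattern: read the meminfo source one "key: value" pair at a time with IFS=': ', skip every key that is not the one requested, and echo the value of the first match. Below is a short, self-contained bash sketch of that lookup; get_meminfo_value is a stand-in name used here for illustration, and the body is a reconstruction of the pattern shown in the xtrace, not SPDK's exact setup/common.sh helper.

  #!/usr/bin/env bash
  # Sketch of the per-key scan that the xtrace above and below repeats for each field.
  get_meminfo_value() {
      local get=$1            # e.g. AnonHugePages, HugePages_Surp, HugePages_Rsvd
      local var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # non-matching keys are skipped, as in the trace
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }

  get_meminfo_value HugePages_Surp    # prints 0 on this host, matching the trace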
10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: every /proc/meminfo field from MemAvailable through VmallocTotal is compared against AnonHugePages and skipped with 'continue']
10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66481996 kB' 'MemAvailable: 72498076 kB' 'Buffers: 30740 kB' 'Cached: 20095068 kB' 'SwapCached: 0 kB' 'Active: 14944068 kB' 
'Inactive: 5750580 kB' 'Active(anon): 14428596 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572156 kB' 'Mapped: 214060 kB' 'Shmem: 13859756 kB' 'KReclaimable: 585948 kB' 'Slab: 1229536 kB' 'SReclaimable: 585948 kB' 'SUnreclaim: 643588 kB' 'KernelStack: 17632 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15733768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215152 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.080 10:31:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _
[xtrace condensed: every /proc/meminfo field from Active through HugePages_Total is compared against HugePages_Surp and skipped with 'continue']
10:31:54 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@31 -- # IFS=': ' 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66481248 kB' 'MemAvailable: 72497328 kB' 'Buffers: 30740 kB' 'Cached: 20095072 kB' 'SwapCached: 0 kB' 'Active: 14945008 kB' 'Inactive: 5750580 kB' 'Active(anon): 14429536 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573596 kB' 'Mapped: 214564 kB' 'Shmem: 13859760 kB' 'KReclaimable: 585948 kB' 'Slab: 1229536 kB' 'SReclaimable: 585948 kB' 'SUnreclaim: 643588 kB' 'KernelStack: 17648 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15735276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215136 kB' 'VmallocChunk: 0 
kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.082 10:31:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.082 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.083 10:31:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.083 10:31:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.083 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # echo 0 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:29.084 nr_hugepages=1024 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:29.084 resv_hugepages=0 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:29.084 surplus_hugepages=0 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:29.084 anon_hugepages=0 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66482092 kB' 'MemAvailable: 72498172 kB' 'Buffers: 30740 kB' 'Cached: 20095092 kB' 'SwapCached: 0 kB' 'Active: 14950076 kB' 'Inactive: 5750580 kB' 'Active(anon): 14434604 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577772 kB' 'Mapped: 214824 kB' 'Shmem: 13859780 kB' 'KReclaimable: 585948 kB' 'Slab: 1229536 kB' 'SReclaimable: 585948 kB' 'SUnreclaim: 643588 kB' 'KernelStack: 17632 kB' 'PageTables: 8816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15739928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215140 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:04:29.084 10:31:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 10:31:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
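The xtrace records above and below walk setup/common.sh's get_meminfo helper one /proc/meminfo field at a time: pick the system-wide or per-node meminfo file, strip the "Node N " prefix, then read each "key: value" pair and skip it unless it matches the requested key. A minimal sketch of that loop, reconstructed from this trace (field names, the extglob prefix strip, the IFS=': ' read) rather than copied from the SPDK source, looks roughly like this:

    # sketch only; assumes the behaviour shown in the trace, not the verbatim helper
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo mem
        # with a node id, read the per-NUMA-node stats from sysfs instead
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # sysfs prefixes every field with "Node N "; strip it (extglob pattern)
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            # every non-matching field produces one "continue" record in the trace
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    # e.g.  get_meminfo HugePages_Surp      -> 0    (system-wide)
    #       get_meminfo HugePages_Free 0    -> 512  (NUMA node 0 only)

The even_2G_alloc test then uses these lookups to confirm the 1024 hugepages are split evenly, 512 per NUMA node, before and after accounting for reserved and surplus pages.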
00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:29.086 10:31:54 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 36867844 kB' 'MemUsed: 11197020 kB' 'SwapCached: 0 kB' 'Active: 6913760 kB' 'Inactive: 1198740 kB' 'Active(anon): 6681500 kB' 'Inactive(anon): 0 kB' 'Active(file): 232260 kB' 'Inactive(file): 1198740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7696052 kB' 'Mapped: 95812 kB' 'AnonPages: 419624 kB' 'Shmem: 6265052 kB' 'KernelStack: 10424 kB' 'PageTables: 5584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271084 kB' 'Slab: 611388 kB' 'SReclaimable: 271084 kB' 'SUnreclaim: 340304 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.086 
10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 
10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.087 10:31:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- 
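The xtrace run above is setup/common.sh's get_meminfo walking every field of node0's meminfo until it reaches HugePages_Surp and echoes 0. A condensed sketch of that lookup, reconstructed from the traced statements rather than copied from the script, so minor details (such as how the file is fed to mapfile) may differ:

shopt -s extglob

# Sketch of the get_meminfo lookup stepped through above; names and flow follow
# the traced setup/common.sh statements, but treat this as illustrative rather
# than the canonical SPDK helper.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries read the node-local meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <id> "; strip it so keys match.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Skip every field until the requested key (e.g. HugePages_Surp).
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done
    echo 0
}

# e.g. get_meminfo HugePages_Surp 0 prints node0's surplus count, 0 in this run.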
setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220576 kB' 'MemFree: 29614248 kB' 'MemUsed: 14606328 kB' 'SwapCached: 0 kB' 'Active: 8030572 kB' 'Inactive: 4551840 kB' 'Active(anon): 7747360 kB' 'Inactive(anon): 0 kB' 'Active(file): 283212 kB' 'Inactive(file): 4551840 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12429824 kB' 'Mapped: 118248 kB' 'AnonPages: 152752 kB' 'Shmem: 7594772 kB' 'KernelStack: 7208 kB' 'PageTables: 3208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 314864 kB' 'Slab: 618148 kB' 'SReclaimable: 314864 kB' 'SUnreclaim: 303284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.087 10:31:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.087 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 
10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.088 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.089 10:31:54 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:29.089 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:29.089 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:29.089 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:29.089 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:04:29.089 node0=512 expecting 512 00:04:29.089 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:29.089 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:29.089 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:29.089 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:04:29.089 node1=512 expecting 512 00:04:29.089 10:31:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]] 00:04:29.089 00:04:29.089 real 0m6.146s 00:04:29.089 user 0m1.935s 00:04:29.089 sys 0m4.151s 00:04:29.089 10:31:54 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:29.089 10:31:54 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:29.089 ************************************ 00:04:29.089 END TEST even_2G_alloc 00:04:29.089 ************************************ 00:04:29.089 10:31:55 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc 00:04:29.089 10:31:55 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:29.089 10:31:55 setup.sh.hugepages -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:29.089 10:31:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:29.089 ************************************ 00:04:29.089 START TEST odd_alloc 00:04:29.089 ************************************ 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1127 -- # odd_alloc 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:29.089 10:31:55 
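The even_2G_alloc case ends once both nodes report the 512 pages they were asked for ("node0=512 expecting 512", "node1=512 expecting 512"). A minimal sketch of that final pass/fail comparison, with the per-node counts hard-coded to the values seen above; the echo format and bookkeeping are assumptions, not a copy of setup/hugepages.sh:

# Minimal sketch of the closing comparison; counts are hard-coded to the values
# reported above and the echo format is an assumption.
check_even_split() {
    local -a nodes_test=(512 512)   # pages each node was asked to reserve
    local -a nodes_sys=(512 512)    # pages each node actually reports
    local node
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        ((nodes_sys[node] == nodes_test[node])) || return 1
    done
}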
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.089 10:31:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:32.380 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:32.380 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:32.380 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:32.380 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:32.380 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:32.380 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:32.380 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:32.380 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:32.380 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:32.380 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:32.639 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:32.639 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:32.639 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:32.639 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:32.639 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:32.639 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:32.639 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:35.185 10:32:00 
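odd_alloc requests 2098176 kB, which the helper converts to 1025 pages of 2048 kB, and the per-node assignment traced above then gives node0 the extra page (nodes_test[0]=513, nodes_test[1]=512) before setting HUGEMEM=2049 and re-running setup.sh. A small sketch of how such an odd total can be split across nodes; it reproduces the 513/512 result in the trace but is not the setup/hugepages.sh implementation:

# Illustrative split of an odd hugepage total across NUMA nodes.
split_hugepages_per_node() {
    local total=$1 no_nodes=$2
    local -a nodes
    local node remainder=$((total % no_nodes))
    # Every node starts with the floor of the even share...
    for ((node = 0; node < no_nodes; node++)); do
        nodes[node]=$((total / no_nodes))
    done
    # ...and the leftover pages go to the lowest-numbered nodes first.
    for ((node = 0; node < remainder; node++)); do
        ((nodes[node] += 1))
    done
    for node in "${!nodes[@]}"; do
        echo "node$node=${nodes[node]}"
    done
}

split_hugepages_per_node 1025 2   # prints node0=513, node1=512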
setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66454228 kB' 'MemAvailable: 72470308 kB' 'Buffers: 30740 kB' 'Cached: 20095260 kB' 'SwapCached: 0 kB' 'Active: 14944960 kB' 'Inactive: 5750580 kB' 'Active(anon): 14429488 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572412 kB' 'Mapped: 214180 kB' 'Shmem: 13859948 kB' 'KReclaimable: 585948 kB' 'Slab: 1229712 kB' 'SReclaimable: 585948 kB' 'SUnreclaim: 643764 kB' 'KernelStack: 17552 kB' 'PageTables: 8664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481724 kB' 'Committed_AS: 15734476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215056 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.185 10:32:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.185 10:32:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.185 10:32:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.185 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 
10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66453976 kB' 'MemAvailable: 72470056 kB' 'Buffers: 30740 kB' 'Cached: 20095264 kB' 'SwapCached: 0 kB' 'Active: 14944676 kB' 'Inactive: 5750580 kB' 'Active(anon): 14429204 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572664 kB' 'Mapped: 214132 kB' 'Shmem: 13859952 kB' 'KReclaimable: 585948 kB' 'Slab: 1229712 kB' 'SReclaimable: 585948 kB' 'SUnreclaim: 643764 kB' 'KernelStack: 17568 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481724 kB' 'Committed_AS: 15734492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215040 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
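Before counting surplus pages, verify_nr_hugepages checks the transparent-hugepage policy (the traced string "always [madvise] never" is the content of the THP "enabled" sysfs file) and, since it is not pinned to never, samples AnonHugePages as the anon baseline, which comes back 0 here. Roughly, reusing the get_meminfo sketch shown earlier and assuming the standard sysfs location for the policy file:

# Rough reconstruction of the THP/anon baseline probe.
anon=0
thp_policy=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp_policy != *"[never]"* ]]; then
    # THP is not globally disabled, so record AnonHugePages as the baseline
    # before surplus hugepages are counted; it is 0 in this run.
    anon=$(get_meminfo AnonHugePages)
fi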
00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.186 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 
10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- 
# [[ -n '' ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66453220 kB' 'MemAvailable: 72469300 kB' 'Buffers: 30740 kB' 'Cached: 20095268 kB' 'SwapCached: 0 kB' 'Active: 14944548 kB' 'Inactive: 5750580 kB' 'Active(anon): 14429076 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572536 kB' 'Mapped: 214132 kB' 'Shmem: 13859956 kB' 'KReclaimable: 585948 kB' 'Slab: 1229712 kB' 'SReclaimable: 585948 kB' 'SUnreclaim: 643764 kB' 'KernelStack: 17568 kB' 'PageTables: 8716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481724 kB' 'Committed_AS: 15735648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215024 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.187 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.188 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025 00:04:35.189 nr_hugepages=1025 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:35.189 resv_hugepages=0 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:35.189 surplus_hugepages=0 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:35.189 anon_hugepages=0 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages )) 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66452228 kB' 'MemAvailable: 72468308 kB' 'Buffers: 30740 kB' 'Cached: 20095300 kB' 'SwapCached: 0 kB' 'Active: 14945092 kB' 'Inactive: 5750580 kB' 'Active(anon): 14429620 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573012 kB' 'Mapped: 214132 kB' 'Shmem: 13859988 kB' 'KReclaimable: 585948 kB' 'Slab: 1229712 kB' 'SReclaimable: 585948 kB' 'SUnreclaim: 643764 kB' 'KernelStack: 17536 kB' 'PageTables: 8616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481724 kB' 'Committed_AS: 15737176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215008 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.189 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.190 10:32:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- 
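Between the node split just traced (nodes_sys[0]=513, nodes_sys[1]=512, no_nodes=2) and the per-node HugePages_Surp lookups that follow, the arithmetic is easy to lose in the xtrace. The snippet below is a hedged, minimal sketch of that split only, written for this log and not taken from setup/hugepages.sh: it shows how an odd total such as 1025 ends up distributed 513/512 across two NUMA nodes, with earlier nodes absorbing the remainder. All variable names here are illustrative.

```bash
#!/usr/bin/env bash
# Minimal sketch (assumption, not the SPDK script): split an odd hugepage
# count across the NUMA nodes found under /sys/devices/system/node/.
shopt -s extglob nullglob

nr_hugepages=1025
nodes_sys=()

nodes=(/sys/devices/system/node/node+([0-9]))   # e.g. node0 node1
no_nodes=${#nodes[@]}

if (( no_nodes > 0 )); then
    base=$(( nr_hugepages / no_nodes ))          # 512 when no_nodes=2
    extra=$(( nr_hugepages % no_nodes ))         # 1 leftover page
    for node in "${nodes[@]}"; do
        idx=${node##*node}
        pages=$base
        (( idx < extra )) && pages=$(( pages + 1 ))   # node0 takes the odd page
        nodes_sys[$idx]=$pages
    done
fi

for idx in "${!nodes_sys[@]}"; do
    printf 'node%s: %s hugepages\n' "$idx" "${nodes_sys[$idx]}"
done
```

On the two-node machine in this run the sketch prints 513 for node0 and 512 for node1, which matches the nodes_sys assignments visible in the trace.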
setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 36841280 kB' 'MemUsed: 11223584 kB' 'SwapCached: 0 kB' 'Active: 6915044 kB' 'Inactive: 1198740 kB' 'Active(anon): 6682784 kB' 'Inactive(anon): 0 kB' 'Active(file): 232260 kB' 'Inactive(file): 1198740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7696096 kB' 'Mapped: 95916 kB' 'AnonPages: 420888 kB' 'Shmem: 6265096 kB' 'KernelStack: 10616 kB' 'PageTables: 6376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271084 kB' 'Slab: 611468 kB' 'SReclaimable: 271084 kB' 'SUnreclaim: 340384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.190 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:35.191 10:32:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 
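Editor's note: the setup/common.sh entries around this point trace a per-node meminfo lookup field by field. A minimal sketch of what that helper appears to do, reconstructed from the xtrace output above (the function body, argument handling, and loop shape are inferred, not copied from the repository):

shopt -s extglob

get_meminfo() {                       # usage: get_meminfo <field> [node]
    local get=$1 node=$2
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # Prefer the per-NUMA-node file when a node index was given and it exists.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node <n> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"                   # e.g. HugePages_Surp -> 0
        return 0
    done
    return 1
}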
00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220576 kB' 'MemFree: 29609484 kB' 'MemUsed: 14611092 kB' 'SwapCached: 0 kB' 'Active: 8030664 kB' 'Inactive: 4551840 kB' 'Active(anon): 7747452 kB' 'Inactive(anon): 0 kB' 'Active(file): 283212 kB' 'Inactive(file): 4551840 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12429960 kB' 'Mapped: 118232 kB' 'AnonPages: 152200 kB' 'Shmem: 7594908 kB' 'KernelStack: 7080 kB' 'PageTables: 2956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 314864 kB' 'Slab: 618244 kB' 'SReclaimable: 314864 kB' 'SUnreclaim: 303380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.191 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
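Editor's note: a few entries further on, the test prints 'node0=513 expecting 513' and 'node1=512 expecting 512' and then passes the check '[[ 512 513 == 512 513 ]]'. A rough sketch of how that verdict appears to be assembled, with the page counts taken from this run and the exact hugepages.sh wording inferred rather than quoted:

# Index the sorted_* arrays by the page count itself, so that ${!arr[*]}
# lists the counts in ascending order and two layouts can be compared as strings.
nodes_test=([0]=513 [1]=512)    # per-node counts the odd_alloc test asked for
nodes_sys=([0]=513 [1]=512)     # per-node counts observed on the system (illustrative)
sorted_t=() sorted_s=()
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1
    sorted_s[nodes_sys[node]]=1
    echo "node${node}=${nodes_test[node]} expecting ${nodes_sys[node]}"
done
[[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]] && echo 'odd allocation verified'   # "512 513" == "512 513"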
00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513' 00:04:35.192 node0=513 expecting 513 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:04:35.192 node1=512 expecting 512 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:35.192 00:04:35.192 real 0m5.976s 00:04:35.192 user 0m1.851s 00:04:35.192 sys 0m3.927s 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:35.192 10:32:01 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:35.192 ************************************ 00:04:35.192 END TEST odd_alloc 00:04:35.192 ************************************ 00:04:35.192 10:32:01 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test 
custom_alloc custom_alloc 00:04:35.192 10:32:01 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:35.192 10:32:01 setup.sh.hugepages -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:35.192 10:32:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:35.192 ************************************ 00:04:35.192 START TEST custom_alloc 00:04:35.192 ************************************ 00:04:35.192 10:32:01 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1127 -- # custom_alloc 00:04:35.192 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:04:35.192 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:04:35.192 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=() 00:04:35.192 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:04:35.192 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:35.192 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:04:35.192 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576 00:04:35.192 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:35.192 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:35.192 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:04:35.192 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:35.192 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:35.192 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:35.192 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 00:04:35.192 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@166 -- # (( 2 > 1 )) 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 )) 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@73 -- # (( 2 > 0 )) 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.193 10:32:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:39.383 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:39.383 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:39.383 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:39.383 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:39.383 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:39.383 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:39.383 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:39.383 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:39.383 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:39.383 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:39.383 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:39.383 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:39.383 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:39.383 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:39.383 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:39.383 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:39.383 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:41.294 
10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 65366468 kB' 'MemAvailable: 71382548 kB' 'Buffers: 30740 kB' 'Cached: 20095460 kB' 'SwapCached: 0 kB' 'Active: 14946516 kB' 'Inactive: 5750580 kB' 'Active(anon): 14431044 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574140 kB' 'Mapped: 214404 kB' 'Shmem: 13860148 kB' 'KReclaimable: 585948 kB' 'Slab: 1228904 kB' 'SReclaimable: 585948 kB' 'SUnreclaim: 642956 kB' 'KernelStack: 17488 kB' 'PageTables: 8480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958460 kB' 'Committed_AS: 15734952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215088 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.294 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
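Editor's note: the custom_alloc prologue traced earlier divides the requested pool sizes 1048576 and 2097152 (kB, as read from the trace) by the default 2048 kB hugepage, which is where nodes_hp[0]=512, nodes_hp[1]=1024 and HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' (1536 pages in total) come from. A compressed sketch of that arithmetic, a reconstruction rather than the verbatim hugepages.sh code:

default_hugepages=2048                           # kB per 2 MB hugepage
nodes_hp=()
nodes_hp[0]=$((1048576 / default_hugepages))     # 1 GiB pool  -> 512 pages
nodes_hp[1]=$((2097152 / default_hugepages))     # 2 GiB pool  -> 1024 pages
HUGENODE=() _nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    ((_nr_hugepages += nodes_hp[node]))
done
( IFS=, ; echo "HUGENODE=${HUGENODE[*]} (total ${_nr_hugepages} pages)" )
# -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024 (total 1536 pages)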
00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
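Editor's note: before totalling hugepages, verify_nr_hugepages checks the kernel's transparent hugepage mode (the '[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]' entry traced earlier) and only then reads AnonHugePages; the anon=0 that appears a little further on is the result on this host. A hedged sketch of that step, with the sysfs path assumed rather than taken from the log:

# THP mode string looks like "always [madvise] never"; skip the AnonHugePages
# lookup only when the bracketed (active) mode is "never".
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)   # helper sketched next to the node scans above
else
    anon=0
fi
echo "anon=$anon"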
00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 65366116 kB' 'MemAvailable: 71382196 kB' 'Buffers: 30740 kB' 'Cached: 20095464 kB' 'SwapCached: 0 kB' 'Active: 14946332 kB' 'Inactive: 5750580 kB' 'Active(anon): 14430860 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573968 kB' 'Mapped: 214712 kB' 'Shmem: 13860152 kB' 'KReclaimable: 585948 kB' 'Slab: 1228848 kB' 'SReclaimable: 585948 kB' 'SUnreclaim: 642900 kB' 'KernelStack: 17520 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958460 kB' 'Committed_AS: 15736328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215040 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 
0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 10:32:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.297 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.298 10:32:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 65362684 kB' 'MemAvailable: 71378764 kB' 'Buffers: 30740 kB' 'Cached: 20095480 kB' 'SwapCached: 0 kB' 'Active: 14951408 kB' 'Inactive: 5750580 kB' 'Active(anon): 14435936 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579080 kB' 'Mapped: 214928 kB' 'Shmem: 13860168 kB' 'KReclaimable: 585948 kB' 'Slab: 1228840 kB' 'SReclaimable: 585948 kB' 'SUnreclaim: 642892 kB' 'KernelStack: 17520 kB' 'PageTables: 8556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958460 kB' 'Committed_AS: 15741248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215028 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
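Each get_meminfo call in the trace also first decides which meminfo file to read: the system-wide /proc/meminfo when no NUMA node is given, or /sys/devices/system/node/node<N>/meminfo for a specific node, whose lines carry a "Node <N> " prefix that is stripped before parsing (the mem=("${mem[@]#Node +([0-9]) }") step above). A hedged standalone sketch of that source-selection step, with a hypothetical helper name:

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern used when stripping the prefix

# Sketch of the source selection seen in the trace: read system-wide
# /proc/meminfo by default, or a per-NUMA-node meminfo when a node is given.
read_meminfo_lines() {
    local node=$1 mem_f=/proc/meminfo mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; drop that prefix so
    # the same "Key: value" parser works for both sources.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
}

read_meminfo_lines      # system-wide /proc/meminfo
read_meminfo_lines 0    # NUMA node 0, if its meminfo file exists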
00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 10:32:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 10:32:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.299 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536 00:04:41.300 nr_hugepages=1536 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:41.300 resv_hugepages=0 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:41.300 surplus_hugepages=0 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:41.300 anon_hugepages=0 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages )) 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:41.300 
10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 65366820 kB' 'MemAvailable: 71382900 kB' 'Buffers: 30740 kB' 'Cached: 20095508 kB' 'SwapCached: 0 kB' 'Active: 14946520 kB' 'Inactive: 5750580 kB' 'Active(anon): 14431048 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574180 kB' 'Mapped: 214492 kB' 'Shmem: 13860196 kB' 'KReclaimable: 585948 kB' 'Slab: 1228840 kB' 'SReclaimable: 585948 kB' 'SUnreclaim: 642892 kB' 'KernelStack: 17568 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958460 kB' 'Committed_AS: 15735516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215056 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.300 10:32:07 setup.sh.hugepages.custom_alloc -- 
[xtrace of the field-by-field scan elided: every remaining /proc/meminfo field, from Buffers and Cached through CmaFree and Unaccepted, is compared against HugePages_Total and falls through to 'continue']
00:04:41.301 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:41.301 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:04:41.301 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:41.301 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:41.301 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:04:41.301 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
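The get_nodes call above discovers the NUMA nodes and records how many hugepages each one currently holds (512 on node 0, 1024 on node 1). A minimal sketch of that idea follows; the nr_hugepages sysfs path is an assumption for illustration, since the trace only shows the resulting counts, not where they are read from.

    #!/usr/bin/env bash
    # Sketch of the get_nodes step traced above: record each NUMA node's current
    # 2048 kB hugepage count in nodes_sys[], keyed by node number.
    shopt -s extglob nullglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # Assumed source for the count; the trace only shows the values 512 and 1024.
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || exit 1
    for n in "${!nodes_sys[@]}"; do
        echo "node$n currently holds ${nodes_sys[$n]} hugepages"
    done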
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 36813224 kB' 'MemUsed: 11251640 kB' 'SwapCached: 0 kB' 'Active: 6913528 kB' 'Inactive: 1198740 kB' 'Active(anon): 6681268 kB' 'Inactive(anon): 0 kB' 'Active(file): 232260 kB' 'Inactive(file): 1198740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7696132 kB' 'Mapped: 95956 kB' 'AnonPages: 419248 kB' 'Shmem: 6265132 kB' 'KernelStack: 10408 kB' 'PageTables: 5512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271084 kB' 'Slab: 610836 kB' 'SReclaimable: 271084 kB' 'SUnreclaim: 339752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace of the node-0 field scan elided: each field from MemTotal through HugePages_Free fails the HugePages_Surp match and falls through to 'continue']
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
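Both per-node lookups go through setup/common.sh's get_meminfo, whose trace appears above: pick the node's own meminfo file when it exists, strip the "Node N " prefix, then scan "Field: value" pairs until the requested field is found. The following is a sketch reconstructed from that trace, not the script itself; the loop structure around read is an assumption.

    #!/usr/bin/env bash
    # Sketch of the get_meminfo loop traced above: read a (possibly per-node) meminfo
    # file and print the value of one field.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo mem
        # Per-node lookups switch to the node's own meminfo file when it exists.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip that so the
        # "Field: value" layout matches /proc/meminfo.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
    }
    get_meminfo HugePages_Surp 0   # prints 0 on this box, per the node-0 dump above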
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220576 kB' 'MemFree: 28554228 kB' 'MemUsed: 15666348 kB' 'SwapCached: 0 kB' 'Active: 8032848 kB' 'Inactive: 4551840 kB' 'Active(anon): 7749636 kB' 'Inactive(anon): 0 kB' 'Active(file): 283212 kB' 'Inactive(file): 4551840 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12430160 kB' 'Mapped: 118252 kB' 'AnonPages: 154748 kB' 'Shmem: 7595108 kB' 'KernelStack: 7144 kB' 'PageTables: 3156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 314864 kB' 'Slab: 618004 kB' 'SReclaimable: 314864 kB' 'SUnreclaim: 303140 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:41.302 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace of the node-1 field scan elided: each field from MemTotal through HugePages_Free fails the HugePages_Surp match and falls through to 'continue']
00:04:41.303 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:41.303 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:41.303 10:32:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:41.303 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:04:41.303 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:04:41.303 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:04:41.303 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:04:41.303 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512'
00:04:41.303 node0=512 expecting 512
00:04:41.303 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:04:41.303 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:04:41.303 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:04:41.303 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024'
00:04:41.303 node1=1024 expecting 1024
00:04:41.303 10:32:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:41.303
00:04:41.303 real	0m6.027s
00:04:41.303 user	0m1.964s
00:04:41.303 sys	0m4.024s
00:04:41.303 10:32:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:41.303 10:32:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:41.303 ************************************
00:04:41.303 END TEST custom_alloc
00:04:41.303 ************************************
00:04:41.303 10:32:07 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:41.303 10:32:07 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:41.303 10:32:07 setup.sh.hugepages -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:41.303 10:32:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
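Before the next test starts, it is worth spelling out what the custom_alloc verification above amounts to: the test builds the per-node counts it expects (nodes_test), folds in any reserved or surplus pages the kernel reports, and compares the result with what sysfs actually shows (nodes_sys). A rough sketch follows, under the assumption that the final check at setup/hugepages.sh@129 compares comma-joined per-node lists, as the '512,1024 == 512,1024' trace suggests.

    #!/usr/bin/env bash
    # Sketch of the per-node verification: expected counts vs. what sysfs reports.
    nodes_test=([0]=512 [1]=1024)   # what the custom_alloc test asked for
    nodes_sys=([0]=512 [1]=1024)    # what get_nodes read back from sysfs
    surp=0 resv=0                   # HugePages_Surp / reserved pages were both 0 above
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv + surp ))
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done
    # Assumed final check: the comma-joined lists must match exactly.
    expected=$(IFS=,; echo "${nodes_sys[*]}")
    actual=$(IFS=,; echo "${nodes_test[*]}")
    [[ $actual == "$expected" ]] && echo "per-node hugepage layout verified"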
00:04:41.303 ************************************
00:04:41.303 START TEST no_shrink_alloc
00:04:41.303 ************************************
00:04:41.303 10:32:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1127 -- # no_shrink_alloc
00:04:41.303 10:32:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0
00:04:41.303 10:32:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:04:41.303 10:32:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 ))
00:04:41.303 10:32:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift
00:04:41.303 10:32:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0')
00:04:41.303 10:32:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids
00:04:41.303 10:32:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:04:41.303 10:32:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:04:41.303 10:32:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0
00:04:41.303 10:32:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0')
00:04:41.303 10:32:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:04:41.303 10:32:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:04:41.303 10:32:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:04:41.303 10:32:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:04:41.303 10:32:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:04:41.303 10:32:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 ))
00:04:41.303 10:32:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}"
00:04:41.303 10:32:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024
00:04:41.303 10:32:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0
00:04:41.303 10:32:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024
00:04:41.303 10:32:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0
00:04:41.303 10:32:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output
00:04:41.303 10:32:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:41.303 10:32:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
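The prologue above turns a size request and a node list into a per-node hugepage count and hands it to scripts/setup.sh, whose output follows. A sketch of that conversion, assuming the request is expressed in kB and divided by the 2048 kB default hugepage size (the exact unit handling inside setup/hugepages.sh is not visible in this excerpt):

    #!/usr/bin/env bash
    # Sketch: how a size request plus a node list becomes NRHUGE/HUGENODE for scripts/setup.sh.
    size=2097152             # requested amount (kB, assumed), as in 'get_test_nr_hugepages 2097152 0'
    default_hugepages=2048   # kB per hugepage; 'Hugepagesize: 2048 kB' in the meminfo dump below
    node_ids=(0)             # only node 0 is targeted by this test
    (( size >= default_hugepages )) || exit 1
    nr_hugepages=$(( size / default_hugepages ))   # 1024
    nodes_test=()
    for n in "${node_ids[@]}"; do
        nodes_test[n]=$nr_hugepages
    done
    echo "NRHUGE=$nr_hugepages HUGENODE=${node_ids[0]}"
    # The test then exports these and runs scripts/setup.sh, which performs the allocation
    # and device binding whose output is logged below.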
00:04:45.577 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:45.577 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:45.577 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:45.577 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:45.577 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:45.578 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:45.578 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:45.578 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:45.578 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:45.578 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:45.578 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:45.578 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:45.578 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:45.578 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:45.578 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:45.578 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:45.578 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:47.487 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages
00:04:47.487 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node
00:04:47.487 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:04:47.487 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:04:47.487 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp
00:04:47.487 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv
00:04:47.487 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon
00:04:47.487 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:47.487 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:04:47.487 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:47.487 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:47.487 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:47.487 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:47.487 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.487 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.487 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.487 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.487 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.487 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:47.487 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:47.487 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66414144 kB' 'MemAvailable: 72430224 kB' 'Buffers: 30740 kB' 'Cached: 20095660 kB' 'SwapCached: 0 kB' 'Active: 14947216 kB' 'Inactive: 5750580 kB' 'Active(anon): 14431744 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574572 kB' 'Mapped: 214252 kB' 'Shmem: 13860348 kB' 'KReclaimable: 585948 kB' 'Slab: 1229228 kB' 'SReclaimable: 585948 kB' 'SUnreclaim: 643280 kB' 'KernelStack: 17776 kB' 'PageTables: 9112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15738808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215296 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB'
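verify_nr_hugepages first rules out interference from transparent hugepages before it validates the explicit pool: the string tested at setup/hugepages.sh@95 is the THP 'enabled' setting ('always [madvise] never' here, i.e. madvise mode), and since it is not pinned to [never], the test records system-wide AnonHugePages so THP usage can be accounted for separately. A hedged sketch of that guard; the sysfs path is the standard kernel location for the setting, which the trace itself does not show.

    #!/usr/bin/env bash
    # Sketch: only sample AnonHugePages when transparent hugepages are not disabled.
    thp_enabled=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon=0
    if [[ $thp_enabled != *"[never]"* ]]; then
        # System-wide lookup: no node argument, so /proc/meminfo is read directly,
        # mirroring the 'local node=' / 'mem_f=/proc/meminfo' lines in the trace above.
        anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
    fi
    echo "AnonHugePages: ${anon} kB"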
[xtrace of the field-by-field scan of /proc/meminfo against AnonHugePages follows, in the same continue-until-match pattern as the earlier lookups; the excerpt ends partway through this scan]
Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.488 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66414092 kB' 'MemAvailable: 72430172 kB' 'Buffers: 30740 kB' 'Cached: 20095664 kB' 'SwapCached: 0 kB' 'Active: 14947020 kB' 'Inactive: 5750580 kB' 'Active(anon): 14431548 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574456 kB' 'Mapped: 214244 kB' 'Shmem: 13860352 kB' 'KReclaimable: 585948 kB' 'Slab: 1229228 kB' 'SReclaimable: 585948 kB' 'SUnreclaim: 643280 kB' 'KernelStack: 17760 kB' 'PageTables: 8932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15737320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215168 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.489 
10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.489 
10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.489 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.490 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:47.491 
10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66414328 kB' 'MemAvailable: 72430408 kB' 'Buffers: 30740 kB' 'Cached: 20095700 kB' 'SwapCached: 0 kB' 'Active: 14946328 kB' 'Inactive: 5750580 kB' 'Active(anon): 14430856 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573684 kB' 'Mapped: 214244 kB' 'Shmem: 13860388 kB' 'KReclaimable: 585948 kB' 'Slab: 1229132 kB' 'SReclaimable: 585948 kB' 'SUnreclaim: 643184 kB' 'KernelStack: 17536 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15736212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215104 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.491 10:32:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.491 10:32:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.491 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.492 10:32:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.492 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:47.493 nr_hugepages=1024 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:47.493 resv_hugepages=0 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:47.493 surplus_hugepages=0 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:47.493 anon_hugepages=0 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66413320 kB' 'MemAvailable: 72429400 kB' 'Buffers: 30740 kB' 'Cached: 20095728 kB' 'SwapCached: 0 kB' 'Active: 14946564 kB' 'Inactive: 5750580 kB' 'Active(anon): 14431092 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573940 kB' 'Mapped: 214212 kB' 'Shmem: 13860416 kB' 'KReclaimable: 585948 kB' 'Slab: 1229124 kB' 'SReclaimable: 585948 kB' 'SUnreclaim: 643176 kB' 'KernelStack: 17616 kB' 'PageTables: 8800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15736232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215104 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.493 10:32:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.493 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
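The field-by-field scan traced above (and continuing below until HugePages_Total is reached) is the meminfo parsing that setup/common.sh keeps performing in this run: each line of /proc/meminfo (or of /sys/devices/system/node/nodeN/meminfo, with its leading "Node N " prefix stripped) is split on ': ', non-matching keys are skipped with continue, and the value of the requested key is echoed back to the caller. A minimal sketch of that pattern, using a hypothetical function name (the real helper is get_meminfo in the SPDK test tree):

get_meminfo_sketch() {                       # hypothetical name; mirrors the traced pattern
    local get=$1 var val _
    # The traced helper can also read /sys/devices/system/node/nodeN/meminfo,
    # stripping the leading "Node N " prefix first; this sketch reads /proc/meminfo only.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue     # skip every field until the requested key matches
        echo "$val"                          # value in kB, or a bare page count
        return 0
    done < /proc/meminfo
    return 1
}
# On this runner, get_meminfo_sketch HugePages_Total would print 1024,
# matching the dump captured above.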
00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.494 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 35770736 kB' 'MemUsed: 12294128 kB' 'SwapCached: 0 kB' 'Active: 6913648 kB' 'Inactive: 1198740 kB' 'Active(anon): 6681388 kB' 'Inactive(anon): 0 kB' 'Active(file): 232260 kB' 'Inactive(file): 1198740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7696164 kB' 'Mapped: 96008 kB' 'AnonPages: 419448 kB' 'Shmem: 6265164 kB' 'KernelStack: 10488 kB' 'PageTables: 5696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271084 kB' 'Slab: 611040 kB' 'SReclaimable: 271084 kB' 'SUnreclaim: 339956 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.495 10:32:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.495 
10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.495 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:47.496 node0=1024 expecting 1024 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.496 10:32:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:50.779 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:50.779 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:50.779 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:50.779 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:50.779 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:50.779 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:50.779 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:50.779 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:50.779 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:50.779 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:50.779 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:50.779 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:50.779 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:50.779 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:50.779 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:50.779 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:50.779 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:53.318 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 
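Between the two meminfo dumps the trace does the per-node bookkeeping: get_nodes enumerates /sys/devices/system/node/node*, HugePages_Surp is read for node 0 with the same parser, the reserved count (0 here) is folded in, and the result is reported as "node0=1024 expecting 1024". setup.sh is then re-run with CLEAR_HUGE=no NRHUGE=512 HUGENODE=0; since the script does not shrink an existing hugepage pool, it only logs that 1024 pages are already allocated on node 0, and the verify pass starting here re-reads the counters to confirm they are unchanged. A rough sketch of that check, with the setup.sh path relative to the spdk checkout and the expected count taken from this run as assumptions:

# Ask for fewer pages than are already allocated; the pool must not shrink.
CLEAR_HUGE=no NRHUGE=512 HUGENODE=0 ./scripts/setup.sh
# 2048 kB pages currently allocated on node 0, as the kernel reports them
actual=$(cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages)
[[ $actual -eq 1024 ]] && echo "node0=$actual expecting 1024"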
00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66420816 kB' 'MemAvailable: 72436896 kB' 'Buffers: 30740 kB' 'Cached: 20095840 kB' 'SwapCached: 0 kB' 'Active: 14947352 kB' 'Inactive: 5750580 kB' 'Active(anon): 14431880 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574592 kB' 'Mapped: 214356 kB' 'Shmem: 13860528 kB' 'KReclaimable: 585948 kB' 'Slab: 1228928 kB' 'SReclaimable: 585948 kB' 'SUnreclaim: 642980 kB' 'KernelStack: 17568 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15736864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215184 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- 
00:04:53.318 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] fails and hits continue for each remaining /proc/meminfo field: Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted
00:04:53.320 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:53.320 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:53.320 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:53.320 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0
00:04:53.320 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:04:53.320 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:53.320 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:53.320 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:53.320 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:53.320 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.320 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.320 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.320 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.320 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.320 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:53.320 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:53.320 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66419672 kB' 'MemAvailable: 72435752 kB' 'Buffers: 30740 kB' 'Cached: 20095844 kB' 'SwapCached: 0 kB' 'Active: 14947124 kB' 'Inactive: 5750580 kB' 'Active(anon): 14431652 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574340 kB' 'Mapped: 214348 kB' 'Shmem: 13860532 kB' 'KReclaimable: 585948 kB' 'Slab: 1228928 kB' 'SReclaimable: 585948 kB' 'SUnreclaim: 642980 kB' 'KernelStack: 17584 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15736884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215168 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB'
00:04:53.320 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # per-field scan: every field from MemTotal through HugePages_Rsvd fails the HugePages_Surp comparison and hits continue
00:04:53.321 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:53.321 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:53.321 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:53.321 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0
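What the trace above keeps repeating is the body of get_meminfo in setup/common.sh: slurp the memory statistics file into an array, strip any leading "Node <N> " prefix, split each line on ': ', and print the value of the first field whose name equals the requested key. Below is a minimal bash sketch of that helper, reconstructed from the trace rather than copied from the upstream script, so details such as line numbers and error handling may differ.

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node <N> " prefixes

get_meminfo() {
    local get=$1 node=${2:-}   # field name to look up, optional NUMA node
    local var val _
    local mem_f=/proc/meminfo
    local -a mem
    local line

    # Per-node statistics live under /sys/devices/system/node/node<N>/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # skip every non-matching field, as in the trace
        echo "$val"
        return 0
    done
    return 1
}

# The fields this part of the run reads, with the values seen in this log:
get_meminfo AnonHugePages     # -> 0
get_meminfo HugePages_Surp    # -> 0
get_meminfo HugePages_Rsvd    # -> 0
get_meminfo HugePages_Total   # -> 1024

In this run all three "extra" counters print 0 and HugePages_Total prints 1024, which is exactly what the surrounding hugepages.sh checks rely on.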
00:04:53.321 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:04:53.322 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # local get=HugePages_Rsvd; local node=; mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '; read -r var val _
00:04:53.322 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66419424 kB' 'MemAvailable: 72435504 kB' 'Buffers: 30740 kB' 'Cached: 20095860 kB' 'SwapCached: 0 kB' 'Active: 14947092 kB' 'Inactive: 5750580 kB' 'Active(anon): 14431620 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574840 kB' 'Mapped: 214348 kB' 'Shmem: 13860548 kB' 'KReclaimable: 585948 kB' 'Slab: 1228928 kB' 'SReclaimable: 585948 kB' 'SUnreclaim: 642980 kB' 'KernelStack: 17584 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15737628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215152 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB'
00:04:53.322 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # per-field scan: every field from MemTotal through HugePages_Free fails the HugePages_Rsvd comparison and hits continue
00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:04:53.324 nr_hugepages=1024
00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:04:53.324 resv_hugepages=0
00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:04:53.324 surplus_hugepages=0
00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:04:53.324 anon_hugepages=0
00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
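The hugepages.sh lines just traced (@96 through @108) boil down to a simple accounting identity on those values: with no surplus, reserved or anonymous huge pages in play, the pool must still hold exactly the 1024 pages the test configured. A hedged sketch of that check follows; variable names mirror the trace, the get_meminfo stand-in is a one-liner rather than the fuller helper above, and the real script obtains the left-hand count slightly earlier in the function.

#!/usr/bin/env bash
# Compact stand-in for get_meminfo: print the numeric value of one /proc/meminfo field.
get_meminfo() { awk -F': +' -v key="$1" '$1 == key { print $2 + 0; exit }' /proc/meminfo; }

nr_hugepages=1024                     # pool size configured earlier in this run

anon=$(get_meminfo AnonHugePages)     # anonymous (transparent) huge pages in use: 0 here
surp=$(get_meminfo HugePages_Surp)    # surplus pages beyond the static pool: 0
resv=$(get_meminfo HugePages_Rsvd)    # reserved but not yet faulted-in pages: 0

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# Both assertions from the trace: the pool still accounts for exactly the requested
# 1024 pages, i.e. nothing was shrunk away or silently added.
total=$(get_meminfo HugePages_Total)
(( total == nr_hugepages + surp + resv ))
(( total == nr_hugepages ))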
34359738367 kB' 'VmallocUsed: 215136 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.324 10:32:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.324 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 
10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:53.325 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:53.325 
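The long run of field checks above is setup/common.sh's get_meminfo helper scanning every record of /proc/meminfo (or a per-node meminfo file) until it reaches the requested key, here HugePages_Rsvd and then HugePages_Total. A condensed, stand-alone sketch of that pattern, reconstructed from the xtrace rather than taken from the SPDK source (the sed-based Node-prefix strip is an assumption):

  get_meminfo() {                     # usage: get_meminfo <field> [node]
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local var val _
      # Per-node meminfo prefixes every record with "Node <n> "; strip it so
      # both files parse the same way, then scan for the requested field.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      return 1
  }

  get_meminfo HugePages_Total      # prints 1024 on the node traced here
  get_meminfo HugePages_Surp 0     # per-node variant, used further down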
10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 35782344 kB' 'MemUsed: 12282520 kB' 'SwapCached: 0 kB' 'Active: 6914120 kB' 'Inactive: 1198740 kB' 'Active(anon): 6681860 kB' 'Inactive(anon): 0 kB' 'Active(file): 232260 kB' 'Inactive(file): 1198740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7696192 kB' 'Mapped: 96096 kB' 'AnonPages: 419748 kB' 'Shmem: 6265192 kB' 'KernelStack: 10440 kB' 'PageTables: 5564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271084 kB' 'Slab: 611296 kB' 'SReclaimable: 271084 kB' 'SUnreclaim: 340212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.326 10:32:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.326 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.327 10:32:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:53.327 node0=1024 expecting 1024 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:53.327 00:04:53.327 real 0m12.005s 00:04:53.327 user 0m3.929s 00:04:53.327 sys 0m8.056s 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:53.327 10:32:19 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:53.327 ************************************ 00:04:53.327 END TEST no_shrink_alloc 00:04:53.327 ************************************ 00:04:53.327 10:32:19 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp 00:04:53.327 10:32:19 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:04:53.327 10:32:19 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:53.327 10:32:19 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:53.327 10:32:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:53.327 10:32:19 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:53.327 10:32:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:53.327 10:32:19 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:53.327 10:32:19 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:53.327 10:32:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:53.327 10:32:19 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:53.327 10:32:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:53.327 10:32:19 setup.sh.hugepages -- 
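With node0 confirmed to hold all 1024 pages (node0=1024 expecting 1024), the trace moves on to clear_hp, which zeroes every per-node huge page pool and exports CLEAR_HUGE for the later setup.sh runs. A sketch of that step, inferred from the xtrace; set -x does not print redirections, so the nr_hugepages target is an assumption:

  shopt -s extglob                          # for the node+([0-9]) pattern below
  clear_hp() {
      local node hp
      for node in /sys/devices/system/node/node+([0-9]); do
          for hp in "$node"/hugepages/hugepages-*; do
              echo 0 > "$hp/nr_hugepages"   # e.g. .../hugepages-2048kB/nr_hugepages
          done
      done
      export CLEAR_HUGE=yes
  }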
setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:04:53.327 10:32:19 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:04:53.327 00:04:53.327 real 0m39.940s 00:04:53.327 user 0m11.733s 00:04:53.327 sys 0m24.614s 00:04:53.327 10:32:19 setup.sh.hugepages -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:53.327 10:32:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:53.327 ************************************ 00:04:53.327 END TEST hugepages 00:04:53.327 ************************************ 00:04:53.327 10:32:19 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:53.327 10:32:19 setup.sh -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:53.327 10:32:19 setup.sh -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:53.327 10:32:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:53.327 ************************************ 00:04:53.327 START TEST driver 00:04:53.327 ************************************ 00:04:53.327 10:32:19 setup.sh.driver -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:53.587 * Looking for test storage... 00:04:53.587 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:53.588 10:32:19 setup.sh.driver -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:53.588 10:32:19 setup.sh.driver -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:53.588 10:32:19 setup.sh.driver -- common/autotest_common.sh@1691 -- # lcov --version 00:04:53.588 10:32:19 setup.sh.driver -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@344 -- # case "$op" in 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@345 -- # : 1 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@365 -- # decimal 1 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@353 -- # local d=1 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@355 -- # echo 1 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@366 -- # decimal 2 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@353 -- # local d=2 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@355 -- # echo 2 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.588 10:32:19 setup.sh.driver -- scripts/common.sh@368 -- # return 0 00:04:53.588 10:32:19 setup.sh.driver -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.588 10:32:19 setup.sh.driver -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:53.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.588 --rc genhtml_branch_coverage=1 00:04:53.588 --rc genhtml_function_coverage=1 00:04:53.588 --rc genhtml_legend=1 00:04:53.588 --rc geninfo_all_blocks=1 00:04:53.588 --rc geninfo_unexecuted_blocks=1 00:04:53.588 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:53.588 ' 00:04:53.588 10:32:19 setup.sh.driver -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:53.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.588 --rc genhtml_branch_coverage=1 00:04:53.588 --rc genhtml_function_coverage=1 00:04:53.588 --rc genhtml_legend=1 00:04:53.588 --rc geninfo_all_blocks=1 00:04:53.588 --rc geninfo_unexecuted_blocks=1 00:04:53.588 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:53.588 ' 00:04:53.588 10:32:19 setup.sh.driver -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:53.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.588 --rc genhtml_branch_coverage=1 00:04:53.588 --rc genhtml_function_coverage=1 00:04:53.588 --rc genhtml_legend=1 00:04:53.588 --rc geninfo_all_blocks=1 00:04:53.588 --rc geninfo_unexecuted_blocks=1 00:04:53.588 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:53.588 ' 00:04:53.588 10:32:19 setup.sh.driver -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:53.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.588 --rc genhtml_branch_coverage=1 00:04:53.588 --rc genhtml_function_coverage=1 00:04:53.588 --rc genhtml_legend=1 00:04:53.588 --rc geninfo_all_blocks=1 00:04:53.588 --rc geninfo_unexecuted_blocks=1 00:04:53.588 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:53.588 ' 00:04:53.588 10:32:19 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:53.588 10:32:19 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:53.588 10:32:19 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:01.710 10:32:26 setup.sh.driver -- 
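Before the driver test proper starts, the script gates its lcov coverage options on the lcov version via scripts/common.sh's lt/cmp_versions, visible in the xtrace above as the IFS=.-: splits and the decimal checks. A condensed sketch of that comparison (simplified reconstruction; the real helper also validates non-numeric fields and supports more operators):

  lt() { cmp_versions "$1" '<' "$2"; }

  cmp_versions() {
      local -a ver1 ver2
      local op=$2 v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
          ((d1 > d2)) && { [[ $op == '>' ]]; return; }
          ((d1 < d2)) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == *'='* ]]                    # equal versions satisfy ==, <=, >=
  }

  lt 1.15 2 && echo 'lcov predates 2.x'     # the exact check in this log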
setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:01.710 10:32:26 setup.sh.driver -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:01.710 10:32:26 setup.sh.driver -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:01.710 10:32:26 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:01.710 ************************************ 00:05:01.710 START TEST guess_driver 00:05:01.710 ************************************ 00:05:01.710 10:32:26 setup.sh.driver.guess_driver -- common/autotest_common.sh@1127 -- # guess_driver 00:05:01.710 10:32:26 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:01.710 10:32:26 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:01.710 10:32:26 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:01.710 10:32:26 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:01.710 10:32:26 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:01.710 10:32:26 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:01.710 10:32:26 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:01.710 10:32:26 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:01.710 10:32:26 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:01.710 10:32:26 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 238 > 0 )) 00:05:01.710 10:32:26 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:01.710 10:32:26 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:01.710 10:32:26 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:01.710 10:32:26 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:01.710 10:32:26 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:01.710 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:01.710 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:01.710 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:01.710 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:01.710 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:01.710 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:01.710 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:01.710 10:32:26 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:01.710 10:32:26 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:01.710 10:32:26 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:01.710 10:32:26 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:01.710 10:32:26 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:01.710 Looking for driver=vfio-pci 00:05:01.710 10:32:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:01.710 10:32:26 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- 
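The guess_driver test above settles on vfio-pci because the host exposes IOMMU groups (238 of them) and modprobe can resolve vfio_pci to real kernel modules. A minimal sketch of that decision; the uio_pci_generic fallback branch is an assumption, since only the vfio path is exercised in this log:

  guess_driver() {
      shopt -s nullglob
      local groups=(/sys/kernel/iommu_groups/*)
      if ((${#groups[@]} > 0)) &&
          modprobe --show-depends vfio_pci 2> /dev/null | grep -q '\.ko'; then
          echo vfio-pci
      else
          echo uio_pci_generic              # hypothetical fallback
      fi
  }

  driver=$(guess_driver)                    # vfio-pci on the node traced here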
# setup output config 00:05:01.710 10:32:26 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.710 10:32:26 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.246 10:32:30 
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.246 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.505 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.505 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.505 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.505 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.505 10:32:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:07.819 10:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:07.819 10:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:07.819 10:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:09.751 10:32:35 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:09.751 10:32:35 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:09.751 10:32:35 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:09.751 10:32:35 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:17.871 00:05:17.871 real 0m16.181s 00:05:17.871 user 0m3.684s 00:05:17.871 sys 0m8.410s 00:05:17.871 10:32:42 setup.sh.driver.guess_driver -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:17.871 10:32:42 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:17.871 ************************************ 00:05:17.871 END TEST guess_driver 00:05:17.871 ************************************ 00:05:17.871 00:05:17.871 real 0m23.396s 00:05:17.871 user 0m5.560s 00:05:17.871 sys 0m12.779s 00:05:17.871 10:32:42 setup.sh.driver -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:17.871 10:32:42 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:17.871 ************************************ 00:05:17.871 END TEST driver 00:05:17.871 ************************************ 00:05:17.871 10:32:42 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:05:17.871 10:32:42 setup.sh -- 
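Each START TEST / END TEST banner and the real/user/sys summary in this log come from the run_test wrapper in autotest_common.sh: it checks it was given a name plus a command, prints the opening banner, times the command, and closes with the matching banner. A simplified sketch reconstructed from the visible xtrace:

  run_test() {
      [ $# -le 1 ] && return 1              # needs a test name plus a command
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"                             # produces the real/user/sys lines
      local rc=$?
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
      return $rc
  }

  run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh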
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:17.871 10:32:42 setup.sh -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:17.871 10:32:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:17.871 ************************************ 00:05:17.871 START TEST devices 00:05:17.871 ************************************ 00:05:17.871 10:32:42 setup.sh.devices -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:05:17.871 * Looking for test storage... 00:05:17.871 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:05:17.871 10:32:42 setup.sh.devices -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:17.871 10:32:42 setup.sh.devices -- common/autotest_common.sh@1691 -- # lcov --version 00:05:17.871 10:32:42 setup.sh.devices -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:17.871 10:32:43 setup.sh.devices -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@344 -- # case "$op" in 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@345 -- # : 1 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@365 -- # decimal 1 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@353 -- # local d=1 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@355 -- # echo 1 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@366 -- # decimal 2 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@353 -- # local d=2 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@355 -- # echo 2 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.871 10:32:43 setup.sh.devices -- scripts/common.sh@368 -- # return 0 00:05:17.871 10:32:43 setup.sh.devices -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.871 10:32:43 setup.sh.devices -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:17.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.871 --rc genhtml_branch_coverage=1 00:05:17.871 --rc genhtml_function_coverage=1 00:05:17.871 --rc genhtml_legend=1 00:05:17.871 --rc geninfo_all_blocks=1 00:05:17.871 --rc geninfo_unexecuted_blocks=1 00:05:17.871 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:17.871 ' 00:05:17.871 10:32:43 setup.sh.devices -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:17.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.871 --rc genhtml_branch_coverage=1 00:05:17.871 --rc genhtml_function_coverage=1 00:05:17.871 --rc genhtml_legend=1 00:05:17.871 --rc geninfo_all_blocks=1 00:05:17.871 --rc geninfo_unexecuted_blocks=1 00:05:17.871 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:17.871 ' 00:05:17.871 10:32:43 setup.sh.devices -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:17.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.871 --rc genhtml_branch_coverage=1 00:05:17.871 --rc genhtml_function_coverage=1 00:05:17.871 --rc genhtml_legend=1 00:05:17.871 --rc geninfo_all_blocks=1 00:05:17.871 --rc geninfo_unexecuted_blocks=1 00:05:17.871 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:17.871 ' 00:05:17.871 10:32:43 setup.sh.devices -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:17.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.871 --rc genhtml_branch_coverage=1 00:05:17.871 --rc genhtml_function_coverage=1 00:05:17.871 --rc genhtml_legend=1 00:05:17.871 --rc geninfo_all_blocks=1 00:05:17.871 --rc geninfo_unexecuted_blocks=1 00:05:17.871 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:17.871 ' 00:05:17.871 10:32:43 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:17.871 10:32:43 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:17.871 10:32:43 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:17.871 10:32:43 setup.sh.devices -- setup/common.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:23.144 10:32:48 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:23.144 10:32:48 setup.sh.devices -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:23.144 10:32:48 setup.sh.devices -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:23.144 10:32:48 setup.sh.devices -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:23.144 10:32:48 setup.sh.devices -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:23.144 10:32:48 setup.sh.devices -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:23.144 10:32:48 setup.sh.devices -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:23.144 10:32:48 setup.sh.devices -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:23.145 10:32:48 setup.sh.devices -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:23.145 10:32:48 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:23.145 10:32:48 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:23.145 10:32:48 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:23.145 10:32:48 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:23.145 10:32:48 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:23.145 10:32:48 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:23.145 10:32:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:23.145 10:32:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:23.145 10:32:48 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:1a:00.0 00:05:23.145 10:32:48 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\1\a\:\0\0\.\0* ]] 00:05:23.145 10:32:48 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:23.145 10:32:48 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:05:23.145 10:32:48 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:23.145 No valid GPT data, bailing 00:05:23.145 10:32:48 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:23.145 10:32:48 setup.sh.devices -- scripts/common.sh@394 -- # pt= 00:05:23.145 10:32:48 setup.sh.devices -- scripts/common.sh@395 -- # return 1 00:05:23.145 10:32:48 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:23.145 10:32:48 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:23.145 10:32:48 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:23.145 10:32:48 setup.sh.devices -- setup/common.sh@80 -- # echo 4000787030016 00:05:23.145 10:32:48 setup.sh.devices -- setup/devices.sh@204 -- # (( 4000787030016 >= min_disk_size )) 00:05:23.145 10:32:48 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:23.145 10:32:48 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:1a:00.0 00:05:23.145 10:32:48 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:23.145 10:32:48 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:23.145 10:32:48 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:23.145 10:32:48 setup.sh.devices -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:23.145 10:32:48 
setup.sh.devices -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:23.145 10:32:48 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:23.145 ************************************ 00:05:23.145 START TEST nvme_mount 00:05:23.145 ************************************ 00:05:23.145 10:32:48 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1127 -- # nvme_mount 00:05:23.145 10:32:48 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:23.145 10:32:48 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:23.145 10:32:48 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.145 10:32:48 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:23.145 10:32:48 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:23.145 10:32:48 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:23.145 10:32:48 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:23.145 10:32:48 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:23.145 10:32:48 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:23.145 10:32:48 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:23.145 10:32:48 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:23.145 10:32:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:23.145 10:32:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:23.145 10:32:48 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:23.145 10:32:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:23.145 10:32:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:23.145 10:32:48 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:23.145 10:32:48 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:23.145 10:32:48 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:24.080 Creating new GPT entries in memory. 00:05:24.080 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:24.080 other utilities. 00:05:24.080 10:32:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:24.080 10:32:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:24.080 10:32:49 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:24.080 10:32:49 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:24.080 10:32:49 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:25.015 Creating new GPT entries in memory. 00:05:25.015 The operation has completed successfully. 
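Note (illustrative sketch, not part of the captured output): the partition_drive step traced above reduces to a couple of sgdisk calls using the same 1 GiB geometry shown in the trace. The real helper waits for udev block events via scripts/sync_dev_uevents.sh rather than calling partprobe, and the device path below is simply the disk this run selected.
disk=/dev/nvme0n1                      # test disk chosen earlier in this run
sgdisk "$disk" --zap-all               # destroy any existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:2099199    # partition 1: sectors 2048-2099199 (2097152 sectors = 1 GiB at 512 B)
partprobe "$disk"                      # generic stand-in for the udev-event wait the test performs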
00:05:25.015 10:32:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:25.015 10:32:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:25.015 10:32:50 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2829356 00:05:25.015 10:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.015 10:32:50 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:25.015 10:32:50 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.015 10:32:50 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:25.015 10:32:50 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:25.015 10:32:50 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.015 10:32:51 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:25.015 10:32:51 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:05:25.015 10:32:51 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:25.015 10:32:51 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.015 10:32:51 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:25.015 10:32:51 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:25.015 10:32:51 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:25.015 10:32:51 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:25.015 10:32:51 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:25.015 10:32:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.015 10:32:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:05:25.015 10:32:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:25.015 10:32:51 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.015 10:32:51 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.206 
10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:29.206 10:32:54 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.111 10:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:31.111 10:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:31.111 10:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:31.111 10:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:31.111 10:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:31.111 10:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:31.111 10:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:31.111 10:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:31.369 10:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:31.369 10:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:31.369 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:31.369 10:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:31.369 10:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:31.627 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:31.627 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:31.627 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:31.627 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:31.627 10:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:31.627 10:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:31.627 10:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:31.627 10:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:31.627 10:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:31.627 10:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:31.627 10:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:31.627 10:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:05:31.627 10:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:31.627 10:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local 
mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:31.627 10:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:31.627 10:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:31.627 10:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:31.627 10:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:31.627 10:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:31.627 10:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.627 10:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:05:31.627 10:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:31.627 10:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.627 10:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:35.816 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:35.816 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:35.816 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:35.816 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.816 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:35.816 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.816 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:35.816 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.816 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:35.816 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.816 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:35.816 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.816 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:35.816 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.817 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:35.817 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.817 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:35.817 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.817 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:35.817 10:33:01 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.817 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:35.817 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.817 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:35.817 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.817 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:35.817 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.817 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:35.817 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.817 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:35.817 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.817 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:35.817 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.817 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:35.817 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.817 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:35.817 10:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.191 10:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:37.191 10:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:37.191 10:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:37.450 10:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:37.450 10:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:37.450 10:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:37.450 10:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:1a:00.0 data@nvme0n1 '' '' 00:05:37.450 10:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:05:37.450 10:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:37.450 10:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:37.450 10:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:37.450 10:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:37.450 10:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:37.450 10:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local 
pci status 00:05:37.450 10:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.450 10:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:05:37.450 10:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:37.450 10:33:03 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:37.450 10:33:03 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ 
_ status 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:40.742 10:33:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.277 10:33:08 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:43.277 10:33:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:43.277 10:33:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:43.277 10:33:08 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:43.277 10:33:08 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:43.277 10:33:08 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:43.277 10:33:08 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:43.277 10:33:08 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:43.277 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:43.277 00:05:43.277 real 0m20.004s 00:05:43.277 user 0m5.493s 00:05:43.277 sys 0m11.916s 00:05:43.277 10:33:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:43.277 10:33:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:43.277 ************************************ 00:05:43.277 END TEST nvme_mount 00:05:43.277 ************************************ 00:05:43.277 10:33:08 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:43.277 10:33:08 setup.sh.devices -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:43.277 10:33:08 setup.sh.devices -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:43.277 10:33:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:43.277 ************************************ 00:05:43.277 START TEST dm_mount 00:05:43.278 ************************************ 00:05:43.278 10:33:08 setup.sh.devices.dm_mount -- common/autotest_common.sh@1127 -- # dm_mount 00:05:43.278 10:33:08 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:43.278 10:33:08 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:43.278 10:33:08 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:43.278 10:33:08 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:43.278 10:33:08 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:43.278 
10:33:08 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:43.278 10:33:08 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:43.278 10:33:08 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:43.278 10:33:08 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:43.278 10:33:08 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:43.278 10:33:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:43.278 10:33:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:43.278 10:33:08 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:43.278 10:33:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:43.278 10:33:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:43.278 10:33:08 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:43.278 10:33:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:43.278 10:33:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:43.278 10:33:08 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:43.278 10:33:08 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:43.278 10:33:08 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:44.214 Creating new GPT entries in memory. 00:05:44.214 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:44.214 other utilities. 00:05:44.214 10:33:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:44.214 10:33:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:44.214 10:33:09 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:44.214 10:33:09 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:44.214 10:33:09 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:45.152 Creating new GPT entries in memory. 00:05:45.152 The operation has completed successfully. 00:05:45.152 10:33:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:45.152 10:33:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:45.152 10:33:10 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:45.152 10:33:10 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:45.152 10:33:10 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:46.089 The operation has completed successfully. 
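Note (illustrative sketch, not part of the captured output): the dm_mount test below maps the two 1 GiB partitions just created into a single device-mapper node named nvme_dm_test before formatting and mounting it. The exact dm table is not echoed in this trace; a plain linear concatenation such as the following would produce an equivalent /dev/mapper/nvme_dm_test, with sector counts read from the partitions at run time.
p1=/dev/nvme0n1p1; p2=/dev/nvme0n1p2          # the two partitions created above
s1=$(blockdev --getsz "$p1")                  # partition sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")
dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF
readlink -f /dev/mapper/nvme_dm_test          # resolves to /dev/dm-0, as the trace shows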
00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2835238 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:1a:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- 
setup/devices.sh@53 -- # local found=0 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:46.089 10:33:12 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:50.392 10:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.312 10:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:52.312 10:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:52.313 10:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:52.313 10:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:52.313 10:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:52.313 10:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:52.313 10:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:1a:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:52.313 10:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:05:52.313 10:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:52.313 10:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:52.313 10:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:52.313 10:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:52.313 10:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:52.313 10:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:52.313 10:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.313 10:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:05:52.313 10:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:52.313 10:33:18 
setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:52.313 10:33:18 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:55.599 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.599 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:55.599 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:55.599 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.599 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.599 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.599 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.599 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.599 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.599 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.599 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.599 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.599 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.599 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.599 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.599 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.599 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.599 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.599 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.599 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.599 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.599 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.600 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.600 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.600 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.600 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.600 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.600 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.600 10:33:21 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.600 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.600 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.600 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.600 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.600 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.600 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.600 10:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.149 10:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:58.149 10:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:58.149 10:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:58.149 10:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:58.149 10:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:58.149 10:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:58.149 10:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:58.149 10:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:58.149 10:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:58.149 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:58.149 10:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:58.149 10:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:58.149 00:05:58.149 real 0m15.036s 00:05:58.149 user 0m3.752s 00:05:58.149 sys 0m8.140s 00:05:58.149 10:33:23 setup.sh.devices.dm_mount -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:58.149 10:33:23 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:58.149 ************************************ 00:05:58.149 END TEST dm_mount 00:05:58.149 ************************************ 00:05:58.149 10:33:24 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:58.149 10:33:24 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:58.149 10:33:24 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:58.149 10:33:24 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:58.149 10:33:24 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:58.149 10:33:24 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:58.149 10:33:24 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:58.407 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:58.407 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:58.407 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:58.407 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:58.407 10:33:24 
setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:58.408 10:33:24 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:58.408 10:33:24 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:58.408 10:33:24 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:58.408 10:33:24 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:58.408 10:33:24 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:58.408 10:33:24 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:58.408 00:05:58.408 real 0m41.494s 00:05:58.408 user 0m11.104s 00:05:58.408 sys 0m24.333s 00:05:58.408 10:33:24 setup.sh.devices -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:58.408 10:33:24 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:58.408 ************************************ 00:05:58.408 END TEST devices 00:05:58.408 ************************************ 00:05:58.408 00:05:58.408 real 2m24.226s 00:05:58.408 user 0m39.777s 00:05:58.408 sys 1m25.415s 00:05:58.408 10:33:24 setup.sh -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:58.408 10:33:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:58.408 ************************************ 00:05:58.408 END TEST setup.sh 00:05:58.408 ************************************ 00:05:58.408 10:33:24 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:06:02.601 Hugepages 00:06:02.601 node hugesize free / total 00:06:02.601 node0 1048576kB 0 / 0 00:06:02.601 node0 2048kB 1024 / 1024 00:06:02.601 node1 1048576kB 0 / 0 00:06:02.601 node1 2048kB 1024 / 1024 00:06:02.601 00:06:02.601 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:02.601 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:06:02.601 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:06:02.601 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:06:02.601 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:06:02.601 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:06:02.601 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:06:02.601 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:06:02.601 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:06:02.601 NVMe 0000:1a:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:06:02.601 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:06:02.601 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:06:02.601 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:06:02.601 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:06:02.601 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:06:02.601 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:06:02.601 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:06:02.601 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:06:02.601 10:33:28 -- spdk/autotest.sh@117 -- # uname -s 00:06:02.601 10:33:28 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:02.601 10:33:28 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:02.601 10:33:28 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:06:05.890 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:05.890 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:06.149 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:06.149 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:06.149 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:06.149 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:06.149 0000:00:04.1 
(8086 2021): ioatdma -> vfio-pci 00:06:06.149 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:06.149 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:06.149 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:06.149 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:06.149 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:06.149 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:06.149 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:06.149 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:06.149 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:09.439 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:06:11.969 10:33:37 -- common/autotest_common.sh@1515 -- # sleep 1 00:06:12.536 10:33:38 -- common/autotest_common.sh@1516 -- # bdfs=() 00:06:12.536 10:33:38 -- common/autotest_common.sh@1516 -- # local bdfs 00:06:12.536 10:33:38 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:06:12.536 10:33:38 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:06:12.536 10:33:38 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:12.536 10:33:38 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:12.536 10:33:38 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:12.536 10:33:38 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:12.536 10:33:38 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:12.806 10:33:38 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:06:12.806 10:33:38 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:1a:00.0 00:06:12.806 10:33:38 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:06:16.093 Waiting for block devices as requested 00:06:16.351 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme 00:06:16.351 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:16.610 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:16.610 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:16.610 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:16.868 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:16.868 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:16.868 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:17.127 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:17.127 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:17.127 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:17.385 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:17.385 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:17.385 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:17.643 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:17.643 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:17.643 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:20.171 10:33:45 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:20.171 10:33:45 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:1a:00.0 00:06:20.171 10:33:45 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:06:20.171 10:33:45 -- common/autotest_common.sh@1485 -- # grep 0000:1a:00.0/nvme/nvme 00:06:20.171 10:33:45 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 00:06:20.171 10:33:45 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 ]] 00:06:20.171 10:33:45 -- 
common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 00:06:20.171 10:33:45 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:06:20.171 10:33:45 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:06:20.171 10:33:45 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:06:20.171 10:33:45 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:06:20.171 10:33:45 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:20.171 10:33:45 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:20.171 10:33:45 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:06:20.171 10:33:45 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:20.171 10:33:45 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:20.171 10:33:45 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:06:20.171 10:33:45 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:20.171 10:33:45 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:20.171 10:33:45 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:20.171 10:33:45 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:20.171 10:33:45 -- common/autotest_common.sh@1541 -- # continue 00:06:20.171 10:33:45 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:20.171 10:33:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:20.171 10:33:45 -- common/autotest_common.sh@10 -- # set +x 00:06:20.171 10:33:46 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:20.171 10:33:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:20.171 10:33:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.171 10:33:46 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:06:23.453 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:23.453 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:23.453 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:23.453 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:23.453 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:23.453 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:23.453 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:23.453 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:23.453 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:23.453 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:23.453 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:23.453 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:23.453 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:23.453 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:23.453 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:23.453 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:26.743 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:06:29.273 10:33:54 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:29.273 10:33:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:29.273 10:33:54 -- common/autotest_common.sh@10 -- # set +x 00:06:29.273 10:33:54 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:29.273 10:33:54 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:29.273 10:33:54 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:29.273 10:33:54 -- common/autotest_common.sh@1561 -- # bdfs=() 00:06:29.273 10:33:54 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:06:29.274 10:33:54 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:06:29.274 10:33:54 -- 
common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:06:29.274 10:33:54 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:06:29.274 10:33:54 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:29.274 10:33:54 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:29.274 10:33:54 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:29.274 10:33:54 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:29.274 10:33:54 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:29.274 10:33:55 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:06:29.274 10:33:55 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:1a:00.0 00:06:29.274 10:33:55 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:29.274 10:33:55 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:1a:00.0/device 00:06:29.274 10:33:55 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:06:29.274 10:33:55 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:29.274 10:33:55 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:06:29.274 10:33:55 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:06:29.274 10:33:55 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:1a:00.0 00:06:29.274 10:33:55 -- common/autotest_common.sh@1577 -- # [[ -z 0000:1a:00.0 ]] 00:06:29.274 10:33:55 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=2846268 00:06:29.274 10:33:55 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.274 10:33:55 -- common/autotest_common.sh@1583 -- # waitforlisten 2846268 00:06:29.274 10:33:55 -- common/autotest_common.sh@833 -- # '[' -z 2846268 ']' 00:06:29.274 10:33:55 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.274 10:33:55 -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:29.274 10:33:55 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.274 10:33:55 -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:29.274 10:33:55 -- common/autotest_common.sh@10 -- # set +x 00:06:29.274 [2024-11-05 10:33:55.072012] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
00:06:29.274 [2024-11-05 10:33:55.072087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2846268 ] 00:06:29.274 [2024-11-05 10:33:55.172361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.274 [2024-11-05 10:33:55.228056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.532 10:33:55 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:29.532 10:33:55 -- common/autotest_common.sh@866 -- # return 0 00:06:29.532 10:33:55 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:06:29.532 10:33:55 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:06:29.532 10:33:55 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:1a:00.0 00:06:32.816 nvme0n1 00:06:32.816 10:33:58 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:32.816 [2024-11-05 10:33:58.803682] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:06:32.816 request: 00:06:32.816 { 00:06:32.816 "nvme_ctrlr_name": "nvme0", 00:06:32.816 "password": "test", 00:06:32.816 "method": "bdev_nvme_opal_revert", 00:06:32.816 "req_id": 1 00:06:32.816 } 00:06:32.816 Got JSON-RPC error response 00:06:32.816 response: 00:06:32.816 { 00:06:32.816 "code": -32602, 00:06:32.816 "message": "Invalid parameters" 00:06:32.816 } 00:06:32.816 10:33:58 -- common/autotest_common.sh@1589 -- # true 00:06:32.816 10:33:58 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:06:32.816 10:33:58 -- common/autotest_common.sh@1593 -- # killprocess 2846268 00:06:32.816 10:33:58 -- common/autotest_common.sh@952 -- # '[' -z 2846268 ']' 00:06:32.816 10:33:58 -- common/autotest_common.sh@956 -- # kill -0 2846268 00:06:32.816 10:33:58 -- common/autotest_common.sh@957 -- # uname 00:06:32.816 10:33:58 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:32.816 10:33:58 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2846268 00:06:32.816 10:33:58 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:32.816 10:33:58 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:32.816 10:33:58 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2846268' 00:06:32.816 killing process with pid 2846268 00:06:32.816 10:33:58 -- common/autotest_common.sh@971 -- # kill 2846268 00:06:32.816 10:33:58 -- common/autotest_common.sh@976 -- # wait 2846268 00:06:36.998 10:34:02 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:36.998 10:34:02 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:36.998 10:34:02 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:36.998 10:34:02 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:36.998 10:34:02 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:36.998 10:34:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:36.998 10:34:02 -- common/autotest_common.sh@10 -- # set +x 00:06:36.998 10:34:02 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:36.998 10:34:02 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:06:36.998 10:34:02 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:36.998 10:34:02 -- common/autotest_common.sh@1109 
-- # xtrace_disable 00:06:36.998 10:34:02 -- common/autotest_common.sh@10 -- # set +x 00:06:36.998 ************************************ 00:06:36.998 START TEST env 00:06:36.998 ************************************ 00:06:36.998 10:34:02 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:06:36.998 * Looking for test storage... 00:06:36.998 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:06:36.998 10:34:03 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:36.998 10:34:03 env -- common/autotest_common.sh@1691 -- # lcov --version 00:06:36.998 10:34:03 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:37.258 10:34:03 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:37.258 10:34:03 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.258 10:34:03 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.258 10:34:03 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.258 10:34:03 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.258 10:34:03 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.258 10:34:03 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.258 10:34:03 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.258 10:34:03 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.258 10:34:03 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.258 10:34:03 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.258 10:34:03 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.258 10:34:03 env -- scripts/common.sh@344 -- # case "$op" in 00:06:37.258 10:34:03 env -- scripts/common.sh@345 -- # : 1 00:06:37.258 10:34:03 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.258 10:34:03 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.258 10:34:03 env -- scripts/common.sh@365 -- # decimal 1 00:06:37.258 10:34:03 env -- scripts/common.sh@353 -- # local d=1 00:06:37.258 10:34:03 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.258 10:34:03 env -- scripts/common.sh@355 -- # echo 1 00:06:37.258 10:34:03 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.258 10:34:03 env -- scripts/common.sh@366 -- # decimal 2 00:06:37.258 10:34:03 env -- scripts/common.sh@353 -- # local d=2 00:06:37.258 10:34:03 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.258 10:34:03 env -- scripts/common.sh@355 -- # echo 2 00:06:37.258 10:34:03 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.258 10:34:03 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.258 10:34:03 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.258 10:34:03 env -- scripts/common.sh@368 -- # return 0 00:06:37.258 10:34:03 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.258 10:34:03 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:37.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.258 --rc genhtml_branch_coverage=1 00:06:37.258 --rc genhtml_function_coverage=1 00:06:37.258 --rc genhtml_legend=1 00:06:37.258 --rc geninfo_all_blocks=1 00:06:37.258 --rc geninfo_unexecuted_blocks=1 00:06:37.258 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:37.258 ' 00:06:37.258 10:34:03 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:37.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.258 --rc genhtml_branch_coverage=1 00:06:37.258 --rc genhtml_function_coverage=1 00:06:37.258 --rc genhtml_legend=1 00:06:37.258 --rc geninfo_all_blocks=1 00:06:37.258 --rc geninfo_unexecuted_blocks=1 00:06:37.258 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:37.258 ' 00:06:37.258 10:34:03 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:37.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.258 --rc genhtml_branch_coverage=1 00:06:37.258 --rc genhtml_function_coverage=1 00:06:37.258 --rc genhtml_legend=1 00:06:37.258 --rc geninfo_all_blocks=1 00:06:37.258 --rc geninfo_unexecuted_blocks=1 00:06:37.258 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:37.258 ' 00:06:37.258 10:34:03 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:37.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.258 --rc genhtml_branch_coverage=1 00:06:37.258 --rc genhtml_function_coverage=1 00:06:37.258 --rc genhtml_legend=1 00:06:37.258 --rc geninfo_all_blocks=1 00:06:37.258 --rc geninfo_unexecuted_blocks=1 00:06:37.258 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:37.258 ' 00:06:37.258 10:34:03 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:06:37.258 10:34:03 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:37.258 10:34:03 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:37.258 10:34:03 env -- common/autotest_common.sh@10 -- # set +x 00:06:37.258 ************************************ 00:06:37.258 START TEST env_memory 00:06:37.258 ************************************ 00:06:37.258 10:34:03 env.env_memory -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:06:37.258 00:06:37.258 00:06:37.258 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.258 http://cunit.sourceforge.net/ 00:06:37.258 00:06:37.258 00:06:37.258 Suite: memory 00:06:37.258 Test: alloc and free memory map ...[2024-11-05 10:34:03.151699] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:37.258 passed 00:06:37.258 Test: mem map translation ...[2024-11-05 10:34:03.170936] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:37.258 [2024-11-05 10:34:03.170958] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:37.258 [2024-11-05 10:34:03.171002] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:37.258 [2024-11-05 10:34:03.171014] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:37.258 passed 00:06:37.258 Test: mem map registration ...[2024-11-05 10:34:03.203589] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:37.258 [2024-11-05 10:34:03.203609] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:37.258 passed 00:06:37.258 Test: mem map adjacent registrations ...passed 00:06:37.258 00:06:37.258 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.258 suites 1 1 n/a 0 0 00:06:37.258 tests 4 4 4 0 0 00:06:37.258 asserts 152 152 152 0 n/a 00:06:37.258 00:06:37.258 Elapsed time = 0.120 seconds 00:06:37.258 00:06:37.258 real 0m0.134s 00:06:37.258 user 0m0.119s 00:06:37.258 sys 0m0.014s 00:06:37.258 10:34:03 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:37.258 10:34:03 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:37.258 ************************************ 00:06:37.258 END TEST env_memory 00:06:37.258 ************************************ 00:06:37.258 10:34:03 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:37.258 10:34:03 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:37.258 10:34:03 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:37.258 10:34:03 env -- common/autotest_common.sh@10 -- # set +x 00:06:37.258 ************************************ 00:06:37.258 START TEST env_vtophys 00:06:37.258 ************************************ 00:06:37.258 10:34:03 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:37.258 EAL: lib.eal log level changed from notice to debug 00:06:37.258 EAL: Detected lcore 0 as core 0 on socket 0 00:06:37.258 EAL: Detected lcore 1 as core 1 on socket 0 00:06:37.258 EAL: Detected lcore 2 as core 2 on socket 0 00:06:37.258 EAL: Detected lcore 3 as 
core 3 on socket 0 00:06:37.258 EAL: Detected lcore 4 as core 4 on socket 0 00:06:37.258 EAL: Detected lcore 5 as core 8 on socket 0 00:06:37.258 EAL: Detected lcore 6 as core 9 on socket 0 00:06:37.258 EAL: Detected lcore 7 as core 10 on socket 0 00:06:37.258 EAL: Detected lcore 8 as core 11 on socket 0 00:06:37.258 EAL: Detected lcore 9 as core 16 on socket 0 00:06:37.258 EAL: Detected lcore 10 as core 17 on socket 0 00:06:37.258 EAL: Detected lcore 11 as core 18 on socket 0 00:06:37.258 EAL: Detected lcore 12 as core 19 on socket 0 00:06:37.258 EAL: Detected lcore 13 as core 20 on socket 0 00:06:37.258 EAL: Detected lcore 14 as core 24 on socket 0 00:06:37.258 EAL: Detected lcore 15 as core 25 on socket 0 00:06:37.258 EAL: Detected lcore 16 as core 26 on socket 0 00:06:37.258 EAL: Detected lcore 17 as core 27 on socket 0 00:06:37.258 EAL: Detected lcore 18 as core 0 on socket 1 00:06:37.258 EAL: Detected lcore 19 as core 1 on socket 1 00:06:37.258 EAL: Detected lcore 20 as core 2 on socket 1 00:06:37.258 EAL: Detected lcore 21 as core 3 on socket 1 00:06:37.258 EAL: Detected lcore 22 as core 4 on socket 1 00:06:37.259 EAL: Detected lcore 23 as core 8 on socket 1 00:06:37.259 EAL: Detected lcore 24 as core 9 on socket 1 00:06:37.259 EAL: Detected lcore 25 as core 10 on socket 1 00:06:37.259 EAL: Detected lcore 26 as core 11 on socket 1 00:06:37.259 EAL: Detected lcore 27 as core 16 on socket 1 00:06:37.259 EAL: Detected lcore 28 as core 17 on socket 1 00:06:37.259 EAL: Detected lcore 29 as core 18 on socket 1 00:06:37.259 EAL: Detected lcore 30 as core 19 on socket 1 00:06:37.259 EAL: Detected lcore 31 as core 20 on socket 1 00:06:37.259 EAL: Detected lcore 32 as core 24 on socket 1 00:06:37.259 EAL: Detected lcore 33 as core 25 on socket 1 00:06:37.259 EAL: Detected lcore 34 as core 26 on socket 1 00:06:37.259 EAL: Detected lcore 35 as core 27 on socket 1 00:06:37.259 EAL: Detected lcore 36 as core 0 on socket 0 00:06:37.259 EAL: Detected lcore 37 as core 1 on socket 0 00:06:37.259 EAL: Detected lcore 38 as core 2 on socket 0 00:06:37.259 EAL: Detected lcore 39 as core 3 on socket 0 00:06:37.259 EAL: Detected lcore 40 as core 4 on socket 0 00:06:37.259 EAL: Detected lcore 41 as core 8 on socket 0 00:06:37.259 EAL: Detected lcore 42 as core 9 on socket 0 00:06:37.259 EAL: Detected lcore 43 as core 10 on socket 0 00:06:37.259 EAL: Detected lcore 44 as core 11 on socket 0 00:06:37.259 EAL: Detected lcore 45 as core 16 on socket 0 00:06:37.259 EAL: Detected lcore 46 as core 17 on socket 0 00:06:37.259 EAL: Detected lcore 47 as core 18 on socket 0 00:06:37.259 EAL: Detected lcore 48 as core 19 on socket 0 00:06:37.259 EAL: Detected lcore 49 as core 20 on socket 0 00:06:37.259 EAL: Detected lcore 50 as core 24 on socket 0 00:06:37.259 EAL: Detected lcore 51 as core 25 on socket 0 00:06:37.259 EAL: Detected lcore 52 as core 26 on socket 0 00:06:37.259 EAL: Detected lcore 53 as core 27 on socket 0 00:06:37.259 EAL: Detected lcore 54 as core 0 on socket 1 00:06:37.259 EAL: Detected lcore 55 as core 1 on socket 1 00:06:37.259 EAL: Detected lcore 56 as core 2 on socket 1 00:06:37.259 EAL: Detected lcore 57 as core 3 on socket 1 00:06:37.259 EAL: Detected lcore 58 as core 4 on socket 1 00:06:37.259 EAL: Detected lcore 59 as core 8 on socket 1 00:06:37.259 EAL: Detected lcore 60 as core 9 on socket 1 00:06:37.259 EAL: Detected lcore 61 as core 10 on socket 1 00:06:37.259 EAL: Detected lcore 62 as core 11 on socket 1 00:06:37.259 EAL: Detected lcore 63 as core 16 on socket 1 00:06:37.259 EAL: 
Detected lcore 64 as core 17 on socket 1 00:06:37.259 EAL: Detected lcore 65 as core 18 on socket 1 00:06:37.259 EAL: Detected lcore 66 as core 19 on socket 1 00:06:37.259 EAL: Detected lcore 67 as core 20 on socket 1 00:06:37.259 EAL: Detected lcore 68 as core 24 on socket 1 00:06:37.259 EAL: Detected lcore 69 as core 25 on socket 1 00:06:37.259 EAL: Detected lcore 70 as core 26 on socket 1 00:06:37.259 EAL: Detected lcore 71 as core 27 on socket 1 00:06:37.518 EAL: Maximum logical cores by configuration: 128 00:06:37.518 EAL: Detected CPU lcores: 72 00:06:37.518 EAL: Detected NUMA nodes: 2 00:06:37.518 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:37.518 EAL: Checking presence of .so 'librte_eal.so.24' 00:06:37.518 EAL: Checking presence of .so 'librte_eal.so' 00:06:37.518 EAL: Detected static linkage of DPDK 00:06:37.518 EAL: No shared files mode enabled, IPC will be disabled 00:06:37.518 EAL: Bus pci wants IOVA as 'DC' 00:06:37.518 EAL: Buses did not request a specific IOVA mode. 00:06:37.518 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:37.518 EAL: Selected IOVA mode 'VA' 00:06:37.518 EAL: Probing VFIO support... 00:06:37.518 EAL: IOMMU type 1 (Type 1) is supported 00:06:37.518 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:37.518 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:37.518 EAL: VFIO support initialized 00:06:37.518 EAL: Ask a virtual area of 0x2e000 bytes 00:06:37.518 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:37.518 EAL: Setting up physically contiguous memory... 00:06:37.518 EAL: Setting maximum number of open files to 524288 00:06:37.518 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:37.518 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:37.518 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:37.518 EAL: Ask a virtual area of 0x61000 bytes 00:06:37.518 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:37.518 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:37.518 EAL: Ask a virtual area of 0x400000000 bytes 00:06:37.518 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:37.518 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:37.518 EAL: Ask a virtual area of 0x61000 bytes 00:06:37.518 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:37.518 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:37.518 EAL: Ask a virtual area of 0x400000000 bytes 00:06:37.518 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:37.518 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:37.518 EAL: Ask a virtual area of 0x61000 bytes 00:06:37.518 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:37.518 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:37.518 EAL: Ask a virtual area of 0x400000000 bytes 00:06:37.518 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:37.518 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:37.518 EAL: Ask a virtual area of 0x61000 bytes 00:06:37.518 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:37.518 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:37.518 EAL: Ask a virtual area of 0x400000000 bytes 00:06:37.518 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:37.518 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:37.518 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:06:37.518 EAL: Ask a virtual area of 0x61000 bytes 00:06:37.518 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:37.518 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:37.518 EAL: Ask a virtual area of 0x400000000 bytes 00:06:37.518 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:37.518 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:37.518 EAL: Ask a virtual area of 0x61000 bytes 00:06:37.518 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:37.518 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:37.518 EAL: Ask a virtual area of 0x400000000 bytes 00:06:37.518 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:37.518 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:37.518 EAL: Ask a virtual area of 0x61000 bytes 00:06:37.518 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:37.518 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:37.518 EAL: Ask a virtual area of 0x400000000 bytes 00:06:37.518 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:37.518 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:37.518 EAL: Ask a virtual area of 0x61000 bytes 00:06:37.518 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:37.518 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:37.518 EAL: Ask a virtual area of 0x400000000 bytes 00:06:37.518 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:37.518 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:37.518 EAL: Hugepages will be freed exactly as allocated. 00:06:37.518 EAL: No shared files mode enabled, IPC is disabled 00:06:37.518 EAL: No shared files mode enabled, IPC is disabled 00:06:37.518 EAL: TSC frequency is ~2300000 KHz 00:06:37.518 EAL: Main lcore 0 is ready (tid=7f0ea69caa00;cpuset=[0]) 00:06:37.518 EAL: Trying to obtain current memory policy. 00:06:37.518 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:37.518 EAL: Restoring previous memory policy: 0 00:06:37.519 EAL: request: mp_malloc_sync 00:06:37.519 EAL: No shared files mode enabled, IPC is disabled 00:06:37.519 EAL: Heap on socket 0 was expanded by 2MB 00:06:37.519 EAL: No shared files mode enabled, IPC is disabled 00:06:37.519 EAL: Mem event callback 'spdk:(nil)' registered 00:06:37.519 00:06:37.519 00:06:37.519 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.519 http://cunit.sourceforge.net/ 00:06:37.519 00:06:37.519 00:06:37.519 Suite: components_suite 00:06:37.519 Test: vtophys_malloc_test ...passed 00:06:37.519 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:37.519 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:37.519 EAL: Restoring previous memory policy: 4 00:06:37.519 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.519 EAL: request: mp_malloc_sync 00:06:37.519 EAL: No shared files mode enabled, IPC is disabled 00:06:37.519 EAL: Heap on socket 0 was expanded by 4MB 00:06:37.519 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.519 EAL: request: mp_malloc_sync 00:06:37.519 EAL: No shared files mode enabled, IPC is disabled 00:06:37.519 EAL: Heap on socket 0 was shrunk by 4MB 00:06:37.519 EAL: Trying to obtain current memory policy. 
00:06:37.519 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:37.519 EAL: Restoring previous memory policy: 4 00:06:37.519 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.519 EAL: request: mp_malloc_sync 00:06:37.519 EAL: No shared files mode enabled, IPC is disabled 00:06:37.519 EAL: Heap on socket 0 was expanded by 6MB 00:06:37.519 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.519 EAL: request: mp_malloc_sync 00:06:37.519 EAL: No shared files mode enabled, IPC is disabled 00:06:37.519 EAL: Heap on socket 0 was shrunk by 6MB 00:06:37.519 EAL: Trying to obtain current memory policy. 00:06:37.519 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:37.519 EAL: Restoring previous memory policy: 4 00:06:37.519 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.519 EAL: request: mp_malloc_sync 00:06:37.519 EAL: No shared files mode enabled, IPC is disabled 00:06:37.519 EAL: Heap on socket 0 was expanded by 10MB 00:06:37.519 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.519 EAL: request: mp_malloc_sync 00:06:37.519 EAL: No shared files mode enabled, IPC is disabled 00:06:37.519 EAL: Heap on socket 0 was shrunk by 10MB 00:06:37.519 EAL: Trying to obtain current memory policy. 00:06:37.519 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:37.519 EAL: Restoring previous memory policy: 4 00:06:37.519 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.519 EAL: request: mp_malloc_sync 00:06:37.519 EAL: No shared files mode enabled, IPC is disabled 00:06:37.519 EAL: Heap on socket 0 was expanded by 18MB 00:06:37.519 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.519 EAL: request: mp_malloc_sync 00:06:37.519 EAL: No shared files mode enabled, IPC is disabled 00:06:37.519 EAL: Heap on socket 0 was shrunk by 18MB 00:06:37.519 EAL: Trying to obtain current memory policy. 00:06:37.519 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:37.519 EAL: Restoring previous memory policy: 4 00:06:37.519 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.519 EAL: request: mp_malloc_sync 00:06:37.519 EAL: No shared files mode enabled, IPC is disabled 00:06:37.519 EAL: Heap on socket 0 was expanded by 34MB 00:06:37.519 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.519 EAL: request: mp_malloc_sync 00:06:37.519 EAL: No shared files mode enabled, IPC is disabled 00:06:37.519 EAL: Heap on socket 0 was shrunk by 34MB 00:06:37.519 EAL: Trying to obtain current memory policy. 00:06:37.519 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:37.519 EAL: Restoring previous memory policy: 4 00:06:37.519 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.519 EAL: request: mp_malloc_sync 00:06:37.519 EAL: No shared files mode enabled, IPC is disabled 00:06:37.519 EAL: Heap on socket 0 was expanded by 66MB 00:06:37.519 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.519 EAL: request: mp_malloc_sync 00:06:37.519 EAL: No shared files mode enabled, IPC is disabled 00:06:37.519 EAL: Heap on socket 0 was shrunk by 66MB 00:06:37.519 EAL: Trying to obtain current memory policy. 
00:06:37.519 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:37.519 EAL: Restoring previous memory policy: 4 00:06:37.519 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.519 EAL: request: mp_malloc_sync 00:06:37.519 EAL: No shared files mode enabled, IPC is disabled 00:06:37.519 EAL: Heap on socket 0 was expanded by 130MB 00:06:37.519 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.519 EAL: request: mp_malloc_sync 00:06:37.519 EAL: No shared files mode enabled, IPC is disabled 00:06:37.519 EAL: Heap on socket 0 was shrunk by 130MB 00:06:37.519 EAL: Trying to obtain current memory policy. 00:06:37.519 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:37.778 EAL: Restoring previous memory policy: 4 00:06:37.778 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.778 EAL: request: mp_malloc_sync 00:06:37.778 EAL: No shared files mode enabled, IPC is disabled 00:06:37.778 EAL: Heap on socket 0 was expanded by 258MB 00:06:37.778 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.778 EAL: request: mp_malloc_sync 00:06:37.778 EAL: No shared files mode enabled, IPC is disabled 00:06:37.778 EAL: Heap on socket 0 was shrunk by 258MB 00:06:37.778 EAL: Trying to obtain current memory policy. 00:06:37.778 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:37.778 EAL: Restoring previous memory policy: 4 00:06:37.778 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.778 EAL: request: mp_malloc_sync 00:06:37.778 EAL: No shared files mode enabled, IPC is disabled 00:06:37.778 EAL: Heap on socket 0 was expanded by 514MB 00:06:38.037 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.037 EAL: request: mp_malloc_sync 00:06:38.037 EAL: No shared files mode enabled, IPC is disabled 00:06:38.037 EAL: Heap on socket 0 was shrunk by 514MB 00:06:38.037 EAL: Trying to obtain current memory policy. 
00:06:38.037 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:38.296 EAL: Restoring previous memory policy: 4 00:06:38.296 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.296 EAL: request: mp_malloc_sync 00:06:38.296 EAL: No shared files mode enabled, IPC is disabled 00:06:38.296 EAL: Heap on socket 0 was expanded by 1026MB 00:06:38.555 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.815 EAL: request: mp_malloc_sync 00:06:38.815 EAL: No shared files mode enabled, IPC is disabled 00:06:38.815 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:38.815 passed 00:06:38.815 00:06:38.815 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.815 suites 1 1 n/a 0 0 00:06:38.815 tests 2 2 2 0 0 00:06:38.815 asserts 497 497 497 0 n/a 00:06:38.815 00:06:38.815 Elapsed time = 1.155 seconds 00:06:38.815 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.815 EAL: request: mp_malloc_sync 00:06:38.815 EAL: No shared files mode enabled, IPC is disabled 00:06:38.815 EAL: Heap on socket 0 was shrunk by 2MB 00:06:38.815 EAL: No shared files mode enabled, IPC is disabled 00:06:38.815 EAL: No shared files mode enabled, IPC is disabled 00:06:38.815 EAL: No shared files mode enabled, IPC is disabled 00:06:38.815 00:06:38.815 real 0m1.329s 00:06:38.815 user 0m0.756s 00:06:38.815 sys 0m0.545s 00:06:38.815 10:34:04 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:38.815 10:34:04 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:38.815 ************************************ 00:06:38.815 END TEST env_vtophys 00:06:38.815 ************************************ 00:06:38.815 10:34:04 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:06:38.815 10:34:04 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:38.815 10:34:04 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:38.815 10:34:04 env -- common/autotest_common.sh@10 -- # set +x 00:06:38.815 ************************************ 00:06:38.815 START TEST env_pci 00:06:38.815 ************************************ 00:06:38.815 10:34:04 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:06:38.815 00:06:38.815 00:06:38.815 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.815 http://cunit.sourceforge.net/ 00:06:38.815 00:06:38.815 00:06:38.815 Suite: pci 00:06:38.815 Test: pci_hook ...[2024-11-05 10:34:04.727524] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1118:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2847585 has claimed it 00:06:38.815 EAL: Cannot find device (10000:00:01.0) 00:06:38.815 EAL: Failed to attach device on primary process 00:06:38.815 passed 00:06:38.815 00:06:38.815 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.815 suites 1 1 n/a 0 0 00:06:38.815 tests 1 1 1 0 0 00:06:38.815 asserts 25 25 25 0 n/a 00:06:38.815 00:06:38.815 Elapsed time = 0.047 seconds 00:06:38.815 00:06:38.815 real 0m0.068s 00:06:38.815 user 0m0.016s 00:06:38.815 sys 0m0.051s 00:06:38.815 10:34:04 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:38.815 10:34:04 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:38.815 ************************************ 00:06:38.815 END TEST env_pci 00:06:38.815 ************************************ 00:06:38.815 10:34:04 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:38.815 
10:34:04 env -- env/env.sh@15 -- # uname 00:06:38.815 10:34:04 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:38.815 10:34:04 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:38.815 10:34:04 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:38.815 10:34:04 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:38.815 10:34:04 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:38.815 10:34:04 env -- common/autotest_common.sh@10 -- # set +x 00:06:38.815 ************************************ 00:06:38.815 START TEST env_dpdk_post_init 00:06:38.815 ************************************ 00:06:38.815 10:34:04 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:38.815 EAL: Detected CPU lcores: 72 00:06:38.815 EAL: Detected NUMA nodes: 2 00:06:38.815 EAL: Detected static linkage of DPDK 00:06:38.815 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:39.074 EAL: Selected IOVA mode 'VA' 00:06:39.074 EAL: VFIO support initialized 00:06:39.074 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:39.074 EAL: Using IOMMU type 1 (Type 1) 00:06:40.012 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:1a:00.0 (socket 0) 00:06:45.284 EAL: Releasing PCI mapped resource for 0000:1a:00.0 00:06:45.284 EAL: Calling pci_unmap_resource for 0000:1a:00.0 at 0x202001000000 00:06:45.543 Starting DPDK initialization... 00:06:45.543 Starting SPDK post initialization... 00:06:45.543 SPDK NVMe probe 00:06:45.543 Attaching to 0000:1a:00.0 00:06:45.543 Attached to 0000:1a:00.0 00:06:45.543 Cleaning up... 
00:06:45.543 00:06:45.543 real 0m6.606s 00:06:45.543 user 0m4.726s 00:06:45.543 sys 0m1.131s 00:06:45.544 10:34:11 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:45.544 10:34:11 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:45.544 ************************************ 00:06:45.544 END TEST env_dpdk_post_init 00:06:45.544 ************************************ 00:06:45.544 10:34:11 env -- env/env.sh@26 -- # uname 00:06:45.544 10:34:11 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:45.544 10:34:11 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:45.544 10:34:11 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:45.544 10:34:11 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:45.544 10:34:11 env -- common/autotest_common.sh@10 -- # set +x 00:06:45.544 ************************************ 00:06:45.544 START TEST env_mem_callbacks 00:06:45.544 ************************************ 00:06:45.544 10:34:11 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:45.544 EAL: Detected CPU lcores: 72 00:06:45.544 EAL: Detected NUMA nodes: 2 00:06:45.544 EAL: Detected static linkage of DPDK 00:06:45.544 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:45.544 EAL: Selected IOVA mode 'VA' 00:06:45.544 EAL: VFIO support initialized 00:06:45.544 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:45.544 00:06:45.544 00:06:45.544 CUnit - A unit testing framework for C - Version 2.1-3 00:06:45.544 http://cunit.sourceforge.net/ 00:06:45.544 00:06:45.544 00:06:45.544 Suite: memory 00:06:45.544 Test: test ... 
00:06:45.544 register 0x200000200000 2097152 00:06:45.544 malloc 3145728 00:06:45.544 register 0x200000400000 4194304 00:06:45.544 buf 0x200000500000 len 3145728 PASSED 00:06:45.544 malloc 64 00:06:45.544 buf 0x2000004fff40 len 64 PASSED 00:06:45.544 malloc 4194304 00:06:45.544 register 0x200000800000 6291456 00:06:45.544 buf 0x200000a00000 len 4194304 PASSED 00:06:45.544 free 0x200000500000 3145728 00:06:45.544 free 0x2000004fff40 64 00:06:45.544 unregister 0x200000400000 4194304 PASSED 00:06:45.544 free 0x200000a00000 4194304 00:06:45.544 unregister 0x200000800000 6291456 PASSED 00:06:45.544 malloc 8388608 00:06:45.544 register 0x200000400000 10485760 00:06:45.544 buf 0x200000600000 len 8388608 PASSED 00:06:45.544 free 0x200000600000 8388608 00:06:45.544 unregister 0x200000400000 10485760 PASSED 00:06:45.544 passed 00:06:45.544 00:06:45.544 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.544 suites 1 1 n/a 0 0 00:06:45.544 tests 1 1 1 0 0 00:06:45.544 asserts 15 15 15 0 n/a 00:06:45.544 00:06:45.544 Elapsed time = 0.008 seconds 00:06:45.803 00:06:45.803 real 0m0.093s 00:06:45.803 user 0m0.018s 00:06:45.803 sys 0m0.075s 00:06:45.803 10:34:11 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:45.803 10:34:11 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:45.803 ************************************ 00:06:45.803 END TEST env_mem_callbacks 00:06:45.803 ************************************ 00:06:45.803 00:06:45.803 real 0m8.747s 00:06:45.803 user 0m5.849s 00:06:45.803 sys 0m2.161s 00:06:45.803 10:34:11 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:45.803 10:34:11 env -- common/autotest_common.sh@10 -- # set +x 00:06:45.803 ************************************ 00:06:45.803 END TEST env 00:06:45.803 ************************************ 00:06:45.803 10:34:11 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:06:45.803 10:34:11 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:45.803 10:34:11 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:45.803 10:34:11 -- common/autotest_common.sh@10 -- # set +x 00:06:45.803 ************************************ 00:06:45.803 START TEST rpc 00:06:45.803 ************************************ 00:06:45.803 10:34:11 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:06:45.803 * Looking for test storage... 
00:06:45.803 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:45.803 10:34:11 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:45.803 10:34:11 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:45.803 10:34:11 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:46.063 10:34:11 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:46.063 10:34:11 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.063 10:34:11 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.063 10:34:11 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.063 10:34:11 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.063 10:34:11 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.063 10:34:11 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.063 10:34:11 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.063 10:34:11 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.063 10:34:11 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.063 10:34:11 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.063 10:34:11 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.063 10:34:11 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:46.063 10:34:11 rpc -- scripts/common.sh@345 -- # : 1 00:06:46.063 10:34:11 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.063 10:34:11 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.063 10:34:11 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:46.063 10:34:11 rpc -- scripts/common.sh@353 -- # local d=1 00:06:46.063 10:34:11 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.063 10:34:11 rpc -- scripts/common.sh@355 -- # echo 1 00:06:46.063 10:34:11 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.063 10:34:11 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:46.063 10:34:11 rpc -- scripts/common.sh@353 -- # local d=2 00:06:46.063 10:34:11 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.063 10:34:11 rpc -- scripts/common.sh@355 -- # echo 2 00:06:46.063 10:34:11 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.063 10:34:11 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.063 10:34:11 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.063 10:34:11 rpc -- scripts/common.sh@368 -- # return 0 00:06:46.063 10:34:11 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.063 10:34:11 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:46.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.063 --rc genhtml_branch_coverage=1 00:06:46.063 --rc genhtml_function_coverage=1 00:06:46.063 --rc genhtml_legend=1 00:06:46.063 --rc geninfo_all_blocks=1 00:06:46.063 --rc geninfo_unexecuted_blocks=1 00:06:46.063 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.063 ' 00:06:46.063 10:34:11 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:46.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.063 --rc genhtml_branch_coverage=1 00:06:46.063 --rc genhtml_function_coverage=1 00:06:46.063 --rc genhtml_legend=1 00:06:46.063 --rc geninfo_all_blocks=1 00:06:46.063 --rc geninfo_unexecuted_blocks=1 00:06:46.063 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.063 ' 00:06:46.063 10:34:11 rpc -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:06:46.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.063 --rc genhtml_branch_coverage=1 00:06:46.063 --rc genhtml_function_coverage=1 00:06:46.063 --rc genhtml_legend=1 00:06:46.063 --rc geninfo_all_blocks=1 00:06:46.063 --rc geninfo_unexecuted_blocks=1 00:06:46.063 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.063 ' 00:06:46.063 10:34:11 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:46.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.063 --rc genhtml_branch_coverage=1 00:06:46.063 --rc genhtml_function_coverage=1 00:06:46.063 --rc genhtml_legend=1 00:06:46.063 --rc geninfo_all_blocks=1 00:06:46.063 --rc geninfo_unexecuted_blocks=1 00:06:46.063 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.063 ' 00:06:46.063 10:34:11 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2848737 00:06:46.063 10:34:11 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:46.064 10:34:11 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:46.064 10:34:11 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2848737 00:06:46.064 10:34:11 rpc -- common/autotest_common.sh@833 -- # '[' -z 2848737 ']' 00:06:46.064 10:34:11 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.064 10:34:11 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:46.064 10:34:11 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.064 10:34:11 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:46.064 10:34:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.064 [2024-11-05 10:34:11.956855] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:06:46.064 [2024-11-05 10:34:11.956906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2848737 ] 00:06:46.064 [2024-11-05 10:34:12.062379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.064 [2024-11-05 10:34:12.118803] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:46.064 [2024-11-05 10:34:12.118853] app.c: 616:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2848737' to capture a snapshot of events at runtime. 00:06:46.064 [2024-11-05 10:34:12.118867] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:46.064 [2024-11-05 10:34:12.118880] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:46.064 [2024-11-05 10:34:12.118890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2848737 for offline analysis/debug. 
00:06:46.064 [2024-11-05 10:34:12.119487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.323 10:34:12 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:46.323 10:34:12 rpc -- common/autotest_common.sh@866 -- # return 0 00:06:46.323 10:34:12 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:46.323 10:34:12 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:46.323 10:34:12 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:46.323 10:34:12 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:46.323 10:34:12 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:46.323 10:34:12 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:46.323 10:34:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.581 ************************************ 00:06:46.581 START TEST rpc_integrity 00:06:46.581 ************************************ 00:06:46.581 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:06:46.581 10:34:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:46.581 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.581 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.581 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.581 10:34:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:46.581 10:34:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:46.581 10:34:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:46.581 10:34:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:46.582 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.582 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.582 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.582 10:34:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:46.582 10:34:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:46.582 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.582 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.582 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.582 10:34:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:46.582 { 00:06:46.582 "name": "Malloc0", 00:06:46.582 "aliases": [ 00:06:46.582 "108f998b-9425-4462-9def-b5be1f421382" 00:06:46.582 ], 00:06:46.582 "product_name": "Malloc disk", 00:06:46.582 "block_size": 512, 00:06:46.582 "num_blocks": 16384, 00:06:46.582 "uuid": "108f998b-9425-4462-9def-b5be1f421382", 00:06:46.582 "assigned_rate_limits": { 00:06:46.582 "rw_ios_per_sec": 0, 00:06:46.582 "rw_mbytes_per_sec": 0, 00:06:46.582 "r_mbytes_per_sec": 0, 00:06:46.582 "w_mbytes_per_sec": 
0 00:06:46.582 }, 00:06:46.582 "claimed": false, 00:06:46.582 "zoned": false, 00:06:46.582 "supported_io_types": { 00:06:46.582 "read": true, 00:06:46.582 "write": true, 00:06:46.582 "unmap": true, 00:06:46.582 "flush": true, 00:06:46.582 "reset": true, 00:06:46.582 "nvme_admin": false, 00:06:46.582 "nvme_io": false, 00:06:46.582 "nvme_io_md": false, 00:06:46.582 "write_zeroes": true, 00:06:46.582 "zcopy": true, 00:06:46.582 "get_zone_info": false, 00:06:46.582 "zone_management": false, 00:06:46.582 "zone_append": false, 00:06:46.582 "compare": false, 00:06:46.582 "compare_and_write": false, 00:06:46.582 "abort": true, 00:06:46.582 "seek_hole": false, 00:06:46.582 "seek_data": false, 00:06:46.582 "copy": true, 00:06:46.582 "nvme_iov_md": false 00:06:46.582 }, 00:06:46.582 "memory_domains": [ 00:06:46.582 { 00:06:46.582 "dma_device_id": "system", 00:06:46.582 "dma_device_type": 1 00:06:46.582 }, 00:06:46.582 { 00:06:46.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.582 "dma_device_type": 2 00:06:46.582 } 00:06:46.582 ], 00:06:46.582 "driver_specific": {} 00:06:46.582 } 00:06:46.582 ]' 00:06:46.582 10:34:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:46.582 10:34:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:46.582 10:34:12 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:46.582 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.582 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.582 [2024-11-05 10:34:12.517018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:46.582 [2024-11-05 10:34:12.517059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:46.582 [2024-11-05 10:34:12.517084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5127d10 00:06:46.582 [2024-11-05 10:34:12.517098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:46.582 [2024-11-05 10:34:12.518370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:46.582 [2024-11-05 10:34:12.518399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:46.582 Passthru0 00:06:46.582 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.582 10:34:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:46.582 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.582 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.582 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.582 10:34:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:46.582 { 00:06:46.582 "name": "Malloc0", 00:06:46.582 "aliases": [ 00:06:46.582 "108f998b-9425-4462-9def-b5be1f421382" 00:06:46.582 ], 00:06:46.582 "product_name": "Malloc disk", 00:06:46.582 "block_size": 512, 00:06:46.582 "num_blocks": 16384, 00:06:46.582 "uuid": "108f998b-9425-4462-9def-b5be1f421382", 00:06:46.582 "assigned_rate_limits": { 00:06:46.582 "rw_ios_per_sec": 0, 00:06:46.582 "rw_mbytes_per_sec": 0, 00:06:46.582 "r_mbytes_per_sec": 0, 00:06:46.582 "w_mbytes_per_sec": 0 00:06:46.582 }, 00:06:46.582 "claimed": true, 00:06:46.582 "claim_type": "exclusive_write", 00:06:46.582 "zoned": false, 00:06:46.582 "supported_io_types": { 00:06:46.582 "read": true, 00:06:46.582 "write": true, 00:06:46.582 "unmap": true, 
00:06:46.582 "flush": true, 00:06:46.582 "reset": true, 00:06:46.582 "nvme_admin": false, 00:06:46.582 "nvme_io": false, 00:06:46.582 "nvme_io_md": false, 00:06:46.582 "write_zeroes": true, 00:06:46.582 "zcopy": true, 00:06:46.582 "get_zone_info": false, 00:06:46.582 "zone_management": false, 00:06:46.582 "zone_append": false, 00:06:46.582 "compare": false, 00:06:46.582 "compare_and_write": false, 00:06:46.582 "abort": true, 00:06:46.582 "seek_hole": false, 00:06:46.582 "seek_data": false, 00:06:46.582 "copy": true, 00:06:46.582 "nvme_iov_md": false 00:06:46.582 }, 00:06:46.582 "memory_domains": [ 00:06:46.582 { 00:06:46.582 "dma_device_id": "system", 00:06:46.582 "dma_device_type": 1 00:06:46.582 }, 00:06:46.582 { 00:06:46.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.582 "dma_device_type": 2 00:06:46.582 } 00:06:46.582 ], 00:06:46.582 "driver_specific": {} 00:06:46.582 }, 00:06:46.582 { 00:06:46.582 "name": "Passthru0", 00:06:46.582 "aliases": [ 00:06:46.582 "5ac4d0aa-d934-5822-bd85-44b991d91df6" 00:06:46.582 ], 00:06:46.582 "product_name": "passthru", 00:06:46.582 "block_size": 512, 00:06:46.582 "num_blocks": 16384, 00:06:46.582 "uuid": "5ac4d0aa-d934-5822-bd85-44b991d91df6", 00:06:46.582 "assigned_rate_limits": { 00:06:46.582 "rw_ios_per_sec": 0, 00:06:46.582 "rw_mbytes_per_sec": 0, 00:06:46.582 "r_mbytes_per_sec": 0, 00:06:46.582 "w_mbytes_per_sec": 0 00:06:46.582 }, 00:06:46.582 "claimed": false, 00:06:46.582 "zoned": false, 00:06:46.582 "supported_io_types": { 00:06:46.582 "read": true, 00:06:46.582 "write": true, 00:06:46.582 "unmap": true, 00:06:46.582 "flush": true, 00:06:46.582 "reset": true, 00:06:46.582 "nvme_admin": false, 00:06:46.582 "nvme_io": false, 00:06:46.582 "nvme_io_md": false, 00:06:46.582 "write_zeroes": true, 00:06:46.582 "zcopy": true, 00:06:46.582 "get_zone_info": false, 00:06:46.582 "zone_management": false, 00:06:46.582 "zone_append": false, 00:06:46.582 "compare": false, 00:06:46.582 "compare_and_write": false, 00:06:46.582 "abort": true, 00:06:46.582 "seek_hole": false, 00:06:46.582 "seek_data": false, 00:06:46.582 "copy": true, 00:06:46.582 "nvme_iov_md": false 00:06:46.582 }, 00:06:46.582 "memory_domains": [ 00:06:46.582 { 00:06:46.582 "dma_device_id": "system", 00:06:46.582 "dma_device_type": 1 00:06:46.582 }, 00:06:46.582 { 00:06:46.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.582 "dma_device_type": 2 00:06:46.582 } 00:06:46.582 ], 00:06:46.582 "driver_specific": { 00:06:46.582 "passthru": { 00:06:46.582 "name": "Passthru0", 00:06:46.582 "base_bdev_name": "Malloc0" 00:06:46.582 } 00:06:46.582 } 00:06:46.582 } 00:06:46.582 ]' 00:06:46.582 10:34:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:46.582 10:34:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:46.582 10:34:12 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:46.582 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.582 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.582 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.582 10:34:12 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:46.582 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.582 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.582 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.582 10:34:12 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:46.582 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.582 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.582 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.582 10:34:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:46.582 10:34:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:46.582 10:34:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:46.582 00:06:46.582 real 0m0.249s 00:06:46.582 user 0m0.151s 00:06:46.582 sys 0m0.040s 00:06:46.582 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:46.582 10:34:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.582 ************************************ 00:06:46.582 END TEST rpc_integrity 00:06:46.582 ************************************ 00:06:46.841 10:34:12 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:46.841 10:34:12 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:46.841 10:34:12 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:46.841 10:34:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.841 ************************************ 00:06:46.841 START TEST rpc_plugins 00:06:46.841 ************************************ 00:06:46.841 10:34:12 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:06:46.841 10:34:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:46.841 10:34:12 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.841 10:34:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:46.841 10:34:12 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.841 10:34:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:46.841 10:34:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:46.841 10:34:12 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.841 10:34:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:46.841 10:34:12 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.841 10:34:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:46.841 { 00:06:46.841 "name": "Malloc1", 00:06:46.841 "aliases": [ 00:06:46.841 "1559c00f-b99a-4a4c-ab4d-ff4678544091" 00:06:46.841 ], 00:06:46.841 "product_name": "Malloc disk", 00:06:46.841 "block_size": 4096, 00:06:46.841 "num_blocks": 256, 00:06:46.841 "uuid": "1559c00f-b99a-4a4c-ab4d-ff4678544091", 00:06:46.841 "assigned_rate_limits": { 00:06:46.841 "rw_ios_per_sec": 0, 00:06:46.841 "rw_mbytes_per_sec": 0, 00:06:46.841 "r_mbytes_per_sec": 0, 00:06:46.841 "w_mbytes_per_sec": 0 00:06:46.841 }, 00:06:46.841 "claimed": false, 00:06:46.841 "zoned": false, 00:06:46.841 "supported_io_types": { 00:06:46.841 "read": true, 00:06:46.841 "write": true, 00:06:46.841 "unmap": true, 00:06:46.841 "flush": true, 00:06:46.841 "reset": true, 00:06:46.841 "nvme_admin": false, 00:06:46.841 "nvme_io": false, 00:06:46.841 "nvme_io_md": false, 00:06:46.841 "write_zeroes": true, 00:06:46.841 "zcopy": true, 00:06:46.841 "get_zone_info": false, 00:06:46.841 "zone_management": false, 00:06:46.841 "zone_append": false, 00:06:46.841 "compare": false, 00:06:46.841 "compare_and_write": false, 00:06:46.841 "abort": true, 00:06:46.841 "seek_hole": false, 00:06:46.841 "seek_data": false, 00:06:46.841 "copy": true, 00:06:46.841 
"nvme_iov_md": false 00:06:46.841 }, 00:06:46.841 "memory_domains": [ 00:06:46.841 { 00:06:46.841 "dma_device_id": "system", 00:06:46.841 "dma_device_type": 1 00:06:46.841 }, 00:06:46.841 { 00:06:46.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.841 "dma_device_type": 2 00:06:46.841 } 00:06:46.841 ], 00:06:46.841 "driver_specific": {} 00:06:46.841 } 00:06:46.841 ]' 00:06:46.841 10:34:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:46.841 10:34:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:46.841 10:34:12 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:46.841 10:34:12 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.841 10:34:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:46.841 10:34:12 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.841 10:34:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:46.841 10:34:12 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.841 10:34:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:46.841 10:34:12 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.841 10:34:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:46.841 10:34:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:46.841 10:34:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:46.841 00:06:46.841 real 0m0.145s 00:06:46.841 user 0m0.088s 00:06:46.841 sys 0m0.022s 00:06:46.841 10:34:12 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:46.841 10:34:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:46.841 ************************************ 00:06:46.841 END TEST rpc_plugins 00:06:46.841 ************************************ 00:06:46.841 10:34:12 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:46.841 10:34:12 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:46.841 10:34:12 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:46.841 10:34:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.099 ************************************ 00:06:47.099 START TEST rpc_trace_cmd_test 00:06:47.099 ************************************ 00:06:47.099 10:34:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:06:47.099 10:34:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:47.099 10:34:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:47.099 10:34:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.099 10:34:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.099 10:34:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.099 10:34:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:47.099 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2848737", 00:06:47.099 "tpoint_group_mask": "0x8", 00:06:47.099 "iscsi_conn": { 00:06:47.099 "mask": "0x2", 00:06:47.099 "tpoint_mask": "0x0" 00:06:47.099 }, 00:06:47.099 "scsi": { 00:06:47.099 "mask": "0x4", 00:06:47.099 "tpoint_mask": "0x0" 00:06:47.099 }, 00:06:47.099 "bdev": { 00:06:47.099 "mask": "0x8", 00:06:47.099 "tpoint_mask": "0xffffffffffffffff" 00:06:47.099 }, 00:06:47.099 "nvmf_rdma": { 00:06:47.099 "mask": "0x10", 00:06:47.099 "tpoint_mask": "0x0" 00:06:47.099 }, 00:06:47.099 "nvmf_tcp": { 00:06:47.099 "mask": "0x20", 
00:06:47.099 "tpoint_mask": "0x0" 00:06:47.099 }, 00:06:47.099 "ftl": { 00:06:47.099 "mask": "0x40", 00:06:47.099 "tpoint_mask": "0x0" 00:06:47.099 }, 00:06:47.099 "blobfs": { 00:06:47.099 "mask": "0x80", 00:06:47.099 "tpoint_mask": "0x0" 00:06:47.099 }, 00:06:47.099 "dsa": { 00:06:47.100 "mask": "0x200", 00:06:47.100 "tpoint_mask": "0x0" 00:06:47.100 }, 00:06:47.100 "thread": { 00:06:47.100 "mask": "0x400", 00:06:47.100 "tpoint_mask": "0x0" 00:06:47.100 }, 00:06:47.100 "nvme_pcie": { 00:06:47.100 "mask": "0x800", 00:06:47.100 "tpoint_mask": "0x0" 00:06:47.100 }, 00:06:47.100 "iaa": { 00:06:47.100 "mask": "0x1000", 00:06:47.100 "tpoint_mask": "0x0" 00:06:47.100 }, 00:06:47.100 "nvme_tcp": { 00:06:47.100 "mask": "0x2000", 00:06:47.100 "tpoint_mask": "0x0" 00:06:47.100 }, 00:06:47.100 "bdev_nvme": { 00:06:47.100 "mask": "0x4000", 00:06:47.100 "tpoint_mask": "0x0" 00:06:47.100 }, 00:06:47.100 "sock": { 00:06:47.100 "mask": "0x8000", 00:06:47.100 "tpoint_mask": "0x0" 00:06:47.100 }, 00:06:47.100 "blob": { 00:06:47.100 "mask": "0x10000", 00:06:47.100 "tpoint_mask": "0x0" 00:06:47.100 }, 00:06:47.100 "bdev_raid": { 00:06:47.100 "mask": "0x20000", 00:06:47.100 "tpoint_mask": "0x0" 00:06:47.100 }, 00:06:47.100 "scheduler": { 00:06:47.100 "mask": "0x40000", 00:06:47.100 "tpoint_mask": "0x0" 00:06:47.100 } 00:06:47.100 }' 00:06:47.100 10:34:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:47.100 10:34:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:47.100 10:34:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:47.100 10:34:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:47.100 10:34:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:47.100 10:34:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:47.100 10:34:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:47.100 10:34:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:47.100 10:34:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:47.100 10:34:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:47.100 00:06:47.100 real 0m0.224s 00:06:47.100 user 0m0.187s 00:06:47.100 sys 0m0.031s 00:06:47.100 10:34:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:47.100 10:34:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.100 ************************************ 00:06:47.100 END TEST rpc_trace_cmd_test 00:06:47.100 ************************************ 00:06:47.358 10:34:13 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:47.358 10:34:13 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:47.358 10:34:13 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:47.358 10:34:13 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:47.358 10:34:13 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:47.358 10:34:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.358 ************************************ 00:06:47.358 START TEST rpc_daemon_integrity 00:06:47.358 ************************************ 00:06:47.358 10:34:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:06:47.358 10:34:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:47.358 10:34:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.358 10:34:13 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.358 10:34:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.358 10:34:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:47.358 10:34:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:47.358 10:34:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:47.358 10:34:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:47.358 10:34:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.358 10:34:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.358 10:34:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.358 10:34:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:47.358 10:34:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:47.358 10:34:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.358 10:34:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.358 10:34:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.358 10:34:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:47.358 { 00:06:47.358 "name": "Malloc2", 00:06:47.358 "aliases": [ 00:06:47.358 "d3a1ca24-700a-45d6-9d1b-fb21e1b009f5" 00:06:47.358 ], 00:06:47.359 "product_name": "Malloc disk", 00:06:47.359 "block_size": 512, 00:06:47.359 "num_blocks": 16384, 00:06:47.359 "uuid": "d3a1ca24-700a-45d6-9d1b-fb21e1b009f5", 00:06:47.359 "assigned_rate_limits": { 00:06:47.359 "rw_ios_per_sec": 0, 00:06:47.359 "rw_mbytes_per_sec": 0, 00:06:47.359 "r_mbytes_per_sec": 0, 00:06:47.359 "w_mbytes_per_sec": 0 00:06:47.359 }, 00:06:47.359 "claimed": false, 00:06:47.359 "zoned": false, 00:06:47.359 "supported_io_types": { 00:06:47.359 "read": true, 00:06:47.359 "write": true, 00:06:47.359 "unmap": true, 00:06:47.359 "flush": true, 00:06:47.359 "reset": true, 00:06:47.359 "nvme_admin": false, 00:06:47.359 "nvme_io": false, 00:06:47.359 "nvme_io_md": false, 00:06:47.359 "write_zeroes": true, 00:06:47.359 "zcopy": true, 00:06:47.359 "get_zone_info": false, 00:06:47.359 "zone_management": false, 00:06:47.359 "zone_append": false, 00:06:47.359 "compare": false, 00:06:47.359 "compare_and_write": false, 00:06:47.359 "abort": true, 00:06:47.359 "seek_hole": false, 00:06:47.359 "seek_data": false, 00:06:47.359 "copy": true, 00:06:47.359 "nvme_iov_md": false 00:06:47.359 }, 00:06:47.359 "memory_domains": [ 00:06:47.359 { 00:06:47.359 "dma_device_id": "system", 00:06:47.359 "dma_device_type": 1 00:06:47.359 }, 00:06:47.359 { 00:06:47.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.359 "dma_device_type": 2 00:06:47.359 } 00:06:47.359 ], 00:06:47.359 "driver_specific": {} 00:06:47.359 } 00:06:47.359 ]' 00:06:47.359 10:34:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:47.359 10:34:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:47.359 10:34:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:47.359 10:34:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.359 10:34:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.359 [2024-11-05 10:34:13.379324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:47.359 
[2024-11-05 10:34:13.379363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:47.359 [2024-11-05 10:34:13.379388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x52491d0 00:06:47.359 [2024-11-05 10:34:13.379403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:47.359 [2024-11-05 10:34:13.380639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:47.359 [2024-11-05 10:34:13.380667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:47.359 Passthru0 00:06:47.359 10:34:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.359 10:34:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:47.359 10:34:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.359 10:34:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.359 10:34:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.359 10:34:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:47.359 { 00:06:47.359 "name": "Malloc2", 00:06:47.359 "aliases": [ 00:06:47.359 "d3a1ca24-700a-45d6-9d1b-fb21e1b009f5" 00:06:47.359 ], 00:06:47.359 "product_name": "Malloc disk", 00:06:47.359 "block_size": 512, 00:06:47.359 "num_blocks": 16384, 00:06:47.359 "uuid": "d3a1ca24-700a-45d6-9d1b-fb21e1b009f5", 00:06:47.359 "assigned_rate_limits": { 00:06:47.359 "rw_ios_per_sec": 0, 00:06:47.359 "rw_mbytes_per_sec": 0, 00:06:47.359 "r_mbytes_per_sec": 0, 00:06:47.359 "w_mbytes_per_sec": 0 00:06:47.359 }, 00:06:47.359 "claimed": true, 00:06:47.359 "claim_type": "exclusive_write", 00:06:47.359 "zoned": false, 00:06:47.359 "supported_io_types": { 00:06:47.359 "read": true, 00:06:47.359 "write": true, 00:06:47.359 "unmap": true, 00:06:47.359 "flush": true, 00:06:47.359 "reset": true, 00:06:47.359 "nvme_admin": false, 00:06:47.359 "nvme_io": false, 00:06:47.359 "nvme_io_md": false, 00:06:47.359 "write_zeroes": true, 00:06:47.359 "zcopy": true, 00:06:47.359 "get_zone_info": false, 00:06:47.359 "zone_management": false, 00:06:47.359 "zone_append": false, 00:06:47.359 "compare": false, 00:06:47.359 "compare_and_write": false, 00:06:47.359 "abort": true, 00:06:47.359 "seek_hole": false, 00:06:47.359 "seek_data": false, 00:06:47.359 "copy": true, 00:06:47.359 "nvme_iov_md": false 00:06:47.359 }, 00:06:47.359 "memory_domains": [ 00:06:47.359 { 00:06:47.359 "dma_device_id": "system", 00:06:47.359 "dma_device_type": 1 00:06:47.359 }, 00:06:47.359 { 00:06:47.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.359 "dma_device_type": 2 00:06:47.359 } 00:06:47.359 ], 00:06:47.359 "driver_specific": {} 00:06:47.359 }, 00:06:47.359 { 00:06:47.359 "name": "Passthru0", 00:06:47.359 "aliases": [ 00:06:47.359 "a5ebd56d-3f82-51c5-a5ae-732dcc5afbfb" 00:06:47.359 ], 00:06:47.359 "product_name": "passthru", 00:06:47.359 "block_size": 512, 00:06:47.359 "num_blocks": 16384, 00:06:47.359 "uuid": "a5ebd56d-3f82-51c5-a5ae-732dcc5afbfb", 00:06:47.359 "assigned_rate_limits": { 00:06:47.359 "rw_ios_per_sec": 0, 00:06:47.359 "rw_mbytes_per_sec": 0, 00:06:47.359 "r_mbytes_per_sec": 0, 00:06:47.359 "w_mbytes_per_sec": 0 00:06:47.359 }, 00:06:47.359 "claimed": false, 00:06:47.359 "zoned": false, 00:06:47.359 "supported_io_types": { 00:06:47.359 "read": true, 00:06:47.359 "write": true, 00:06:47.359 "unmap": true, 00:06:47.359 "flush": true, 00:06:47.359 "reset": true, 
00:06:47.359 "nvme_admin": false, 00:06:47.359 "nvme_io": false, 00:06:47.359 "nvme_io_md": false, 00:06:47.359 "write_zeroes": true, 00:06:47.359 "zcopy": true, 00:06:47.359 "get_zone_info": false, 00:06:47.359 "zone_management": false, 00:06:47.359 "zone_append": false, 00:06:47.359 "compare": false, 00:06:47.359 "compare_and_write": false, 00:06:47.359 "abort": true, 00:06:47.359 "seek_hole": false, 00:06:47.359 "seek_data": false, 00:06:47.359 "copy": true, 00:06:47.359 "nvme_iov_md": false 00:06:47.359 }, 00:06:47.359 "memory_domains": [ 00:06:47.359 { 00:06:47.359 "dma_device_id": "system", 00:06:47.359 "dma_device_type": 1 00:06:47.359 }, 00:06:47.359 { 00:06:47.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.359 "dma_device_type": 2 00:06:47.359 } 00:06:47.359 ], 00:06:47.359 "driver_specific": { 00:06:47.359 "passthru": { 00:06:47.359 "name": "Passthru0", 00:06:47.359 "base_bdev_name": "Malloc2" 00:06:47.359 } 00:06:47.359 } 00:06:47.359 } 00:06:47.359 ]' 00:06:47.359 10:34:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:47.617 10:34:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:47.617 10:34:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:47.617 10:34:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.618 10:34:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.618 10:34:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.618 10:34:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:47.618 10:34:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.618 10:34:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.618 10:34:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.618 10:34:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:47.618 10:34:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.618 10:34:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.618 10:34:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.618 10:34:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:47.618 10:34:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:47.618 10:34:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:47.618 00:06:47.618 real 0m0.277s 00:06:47.618 user 0m0.179s 00:06:47.618 sys 0m0.046s 00:06:47.618 10:34:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:47.618 10:34:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.618 ************************************ 00:06:47.618 END TEST rpc_daemon_integrity 00:06:47.618 ************************************ 00:06:47.618 10:34:13 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:47.618 10:34:13 rpc -- rpc/rpc.sh@84 -- # killprocess 2848737 00:06:47.618 10:34:13 rpc -- common/autotest_common.sh@952 -- # '[' -z 2848737 ']' 00:06:47.618 10:34:13 rpc -- common/autotest_common.sh@956 -- # kill -0 2848737 00:06:47.618 10:34:13 rpc -- common/autotest_common.sh@957 -- # uname 00:06:47.618 10:34:13 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:47.618 10:34:13 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2848737 
00:06:47.618 10:34:13 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:47.618 10:34:13 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:47.618 10:34:13 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2848737' 00:06:47.618 killing process with pid 2848737 00:06:47.618 10:34:13 rpc -- common/autotest_common.sh@971 -- # kill 2848737 00:06:47.618 10:34:13 rpc -- common/autotest_common.sh@976 -- # wait 2848737 00:06:48.185 00:06:48.185 real 0m2.206s 00:06:48.185 user 0m2.780s 00:06:48.185 sys 0m0.819s 00:06:48.185 10:34:13 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:48.185 10:34:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.185 ************************************ 00:06:48.185 END TEST rpc 00:06:48.185 ************************************ 00:06:48.185 10:34:14 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:48.185 10:34:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:48.185 10:34:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:48.185 10:34:14 -- common/autotest_common.sh@10 -- # set +x 00:06:48.185 ************************************ 00:06:48.185 START TEST skip_rpc 00:06:48.185 ************************************ 00:06:48.185 10:34:14 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:48.185 * Looking for test storage... 00:06:48.185 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:48.185 10:34:14 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:48.185 10:34:14 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:48.185 10:34:14 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:48.185 10:34:14 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.185 10:34:14 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:48.185 10:34:14 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.185 10:34:14 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:48.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.185 --rc genhtml_branch_coverage=1 00:06:48.185 --rc genhtml_function_coverage=1 00:06:48.185 --rc genhtml_legend=1 00:06:48.185 --rc geninfo_all_blocks=1 00:06:48.185 --rc geninfo_unexecuted_blocks=1 00:06:48.185 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:48.185 ' 00:06:48.185 10:34:14 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:48.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.185 --rc genhtml_branch_coverage=1 00:06:48.185 --rc genhtml_function_coverage=1 00:06:48.185 --rc genhtml_legend=1 00:06:48.185 --rc geninfo_all_blocks=1 00:06:48.185 --rc geninfo_unexecuted_blocks=1 00:06:48.185 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:48.185 ' 00:06:48.185 10:34:14 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:48.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.185 --rc genhtml_branch_coverage=1 00:06:48.185 --rc genhtml_function_coverage=1 00:06:48.185 --rc genhtml_legend=1 00:06:48.185 --rc geninfo_all_blocks=1 00:06:48.185 --rc geninfo_unexecuted_blocks=1 00:06:48.185 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:48.185 ' 00:06:48.185 10:34:14 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:48.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.185 --rc genhtml_branch_coverage=1 00:06:48.185 --rc genhtml_function_coverage=1 00:06:48.185 --rc genhtml_legend=1 00:06:48.185 --rc geninfo_all_blocks=1 00:06:48.185 --rc geninfo_unexecuted_blocks=1 00:06:48.185 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:48.185 ' 00:06:48.185 10:34:14 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:48.185 10:34:14 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:48.185 10:34:14 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:48.185 10:34:14 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:48.185 10:34:14 
skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:48.185 10:34:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.443 ************************************ 00:06:48.443 START TEST skip_rpc 00:06:48.443 ************************************ 00:06:48.443 10:34:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:06:48.443 10:34:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2849184 00:06:48.443 10:34:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:48.443 10:34:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:48.443 10:34:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:48.443 [2024-11-05 10:34:14.303745] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:06:48.443 [2024-11-05 10:34:14.303805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2849184 ] 00:06:48.443 [2024-11-05 10:34:14.428352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.443 [2024-11-05 10:34:14.482475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2849184 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 2849184 ']' 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 2849184 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2849184 
00:06:53.800 10:34:19 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2849184' 00:06:53.800 killing process with pid 2849184 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 2849184 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 2849184 00:06:53.800 00:06:53.800 real 0m5.426s 00:06:53.800 user 0m5.121s 00:06:53.800 sys 0m0.338s 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:53.800 10:34:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.800 ************************************ 00:06:53.800 END TEST skip_rpc 00:06:53.800 ************************************ 00:06:53.800 10:34:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:53.800 10:34:19 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:53.800 10:34:19 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:53.800 10:34:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.800 ************************************ 00:06:53.800 START TEST skip_rpc_with_json 00:06:53.800 ************************************ 00:06:53.800 10:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:06:53.800 10:34:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:53.800 10:34:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2850004 00:06:53.800 10:34:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:53.800 10:34:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2850004 00:06:53.800 10:34:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:53.800 10:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 2850004 ']' 00:06:53.800 10:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.800 10:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:53.800 10:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.800 10:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:53.800 10:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:53.800 [2024-11-05 10:34:19.789892] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
00:06:53.800 [2024-11-05 10:34:19.789941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2850004 ] 00:06:54.057 [2024-11-05 10:34:19.898804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.057 [2024-11-05 10:34:19.957589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.315 10:34:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:54.315 10:34:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:06:54.315 10:34:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:54.315 10:34:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.315 10:34:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:54.315 [2024-11-05 10:34:20.203238] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:54.315 request: 00:06:54.315 { 00:06:54.315 "trtype": "tcp", 00:06:54.315 "method": "nvmf_get_transports", 00:06:54.315 "req_id": 1 00:06:54.315 } 00:06:54.315 Got JSON-RPC error response 00:06:54.315 response: 00:06:54.315 { 00:06:54.315 "code": -19, 00:06:54.315 "message": "No such device" 00:06:54.315 } 00:06:54.315 10:34:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:54.315 10:34:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:54.315 10:34:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.315 10:34:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:54.315 [2024-11-05 10:34:20.211354] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.315 10:34:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.315 10:34:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:54.315 10:34:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.315 10:34:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:54.315 10:34:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.315 10:34:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:54.315 { 00:06:54.315 "subsystems": [ 00:06:54.315 { 00:06:54.315 "subsystem": "scheduler", 00:06:54.315 "config": [ 00:06:54.315 { 00:06:54.315 "method": "framework_set_scheduler", 00:06:54.315 "params": { 00:06:54.315 "name": "static" 00:06:54.315 } 00:06:54.315 } 00:06:54.315 ] 00:06:54.315 }, 00:06:54.315 { 00:06:54.315 "subsystem": "vmd", 00:06:54.315 "config": [] 00:06:54.315 }, 00:06:54.315 { 00:06:54.315 "subsystem": "sock", 00:06:54.315 "config": [ 00:06:54.316 { 00:06:54.316 "method": "sock_set_default_impl", 00:06:54.316 "params": { 00:06:54.316 "impl_name": "posix" 00:06:54.316 } 00:06:54.316 }, 00:06:54.316 { 00:06:54.316 "method": "sock_impl_set_options", 00:06:54.316 "params": { 00:06:54.316 "impl_name": "ssl", 00:06:54.316 "recv_buf_size": 4096, 00:06:54.316 "send_buf_size": 4096, 00:06:54.316 "enable_recv_pipe": true, 00:06:54.316 "enable_quickack": false, 00:06:54.316 
"enable_placement_id": 0, 00:06:54.316 "enable_zerocopy_send_server": true, 00:06:54.316 "enable_zerocopy_send_client": false, 00:06:54.316 "zerocopy_threshold": 0, 00:06:54.316 "tls_version": 0, 00:06:54.316 "enable_ktls": false 00:06:54.316 } 00:06:54.316 }, 00:06:54.316 { 00:06:54.316 "method": "sock_impl_set_options", 00:06:54.316 "params": { 00:06:54.316 "impl_name": "posix", 00:06:54.316 "recv_buf_size": 2097152, 00:06:54.316 "send_buf_size": 2097152, 00:06:54.316 "enable_recv_pipe": true, 00:06:54.316 "enable_quickack": false, 00:06:54.316 "enable_placement_id": 0, 00:06:54.316 "enable_zerocopy_send_server": true, 00:06:54.316 "enable_zerocopy_send_client": false, 00:06:54.316 "zerocopy_threshold": 0, 00:06:54.316 "tls_version": 0, 00:06:54.316 "enable_ktls": false 00:06:54.316 } 00:06:54.316 } 00:06:54.316 ] 00:06:54.316 }, 00:06:54.316 { 00:06:54.316 "subsystem": "iobuf", 00:06:54.316 "config": [ 00:06:54.316 { 00:06:54.316 "method": "iobuf_set_options", 00:06:54.316 "params": { 00:06:54.316 "small_pool_count": 8192, 00:06:54.316 "large_pool_count": 1024, 00:06:54.316 "small_bufsize": 8192, 00:06:54.316 "large_bufsize": 135168, 00:06:54.316 "enable_numa": false 00:06:54.316 } 00:06:54.316 } 00:06:54.316 ] 00:06:54.316 }, 00:06:54.316 { 00:06:54.316 "subsystem": "keyring", 00:06:54.316 "config": [] 00:06:54.316 }, 00:06:54.316 { 00:06:54.316 "subsystem": "vfio_user_target", 00:06:54.316 "config": null 00:06:54.316 }, 00:06:54.316 { 00:06:54.316 "subsystem": "fsdev", 00:06:54.316 "config": [ 00:06:54.316 { 00:06:54.316 "method": "fsdev_set_opts", 00:06:54.316 "params": { 00:06:54.316 "fsdev_io_pool_size": 65535, 00:06:54.316 "fsdev_io_cache_size": 256 00:06:54.316 } 00:06:54.316 } 00:06:54.316 ] 00:06:54.316 }, 00:06:54.316 { 00:06:54.316 "subsystem": "accel", 00:06:54.316 "config": [ 00:06:54.316 { 00:06:54.316 "method": "accel_set_options", 00:06:54.316 "params": { 00:06:54.316 "small_cache_size": 128, 00:06:54.316 "large_cache_size": 16, 00:06:54.316 "task_count": 2048, 00:06:54.316 "sequence_count": 2048, 00:06:54.316 "buf_count": 2048 00:06:54.316 } 00:06:54.316 } 00:06:54.316 ] 00:06:54.316 }, 00:06:54.316 { 00:06:54.316 "subsystem": "bdev", 00:06:54.316 "config": [ 00:06:54.316 { 00:06:54.316 "method": "bdev_set_options", 00:06:54.316 "params": { 00:06:54.316 "bdev_io_pool_size": 65535, 00:06:54.316 "bdev_io_cache_size": 256, 00:06:54.316 "bdev_auto_examine": true, 00:06:54.316 "iobuf_small_cache_size": 128, 00:06:54.316 "iobuf_large_cache_size": 16 00:06:54.316 } 00:06:54.316 }, 00:06:54.316 { 00:06:54.316 "method": "bdev_raid_set_options", 00:06:54.316 "params": { 00:06:54.316 "process_window_size_kb": 1024, 00:06:54.316 "process_max_bandwidth_mb_sec": 0 00:06:54.316 } 00:06:54.316 }, 00:06:54.316 { 00:06:54.316 "method": "bdev_nvme_set_options", 00:06:54.316 "params": { 00:06:54.316 "action_on_timeout": "none", 00:06:54.316 "timeout_us": 0, 00:06:54.316 "timeout_admin_us": 0, 00:06:54.316 "keep_alive_timeout_ms": 10000, 00:06:54.316 "arbitration_burst": 0, 00:06:54.316 "low_priority_weight": 0, 00:06:54.316 "medium_priority_weight": 0, 00:06:54.316 "high_priority_weight": 0, 00:06:54.316 "nvme_adminq_poll_period_us": 10000, 00:06:54.316 "nvme_ioq_poll_period_us": 0, 00:06:54.316 "io_queue_requests": 0, 00:06:54.316 "delay_cmd_submit": true, 00:06:54.316 "transport_retry_count": 4, 00:06:54.316 "bdev_retry_count": 3, 00:06:54.316 "transport_ack_timeout": 0, 00:06:54.316 "ctrlr_loss_timeout_sec": 0, 00:06:54.316 "reconnect_delay_sec": 0, 00:06:54.316 
"fast_io_fail_timeout_sec": 0, 00:06:54.316 "disable_auto_failback": false, 00:06:54.316 "generate_uuids": false, 00:06:54.316 "transport_tos": 0, 00:06:54.316 "nvme_error_stat": false, 00:06:54.316 "rdma_srq_size": 0, 00:06:54.316 "io_path_stat": false, 00:06:54.316 "allow_accel_sequence": false, 00:06:54.316 "rdma_max_cq_size": 0, 00:06:54.316 "rdma_cm_event_timeout_ms": 0, 00:06:54.316 "dhchap_digests": [ 00:06:54.316 "sha256", 00:06:54.316 "sha384", 00:06:54.316 "sha512" 00:06:54.316 ], 00:06:54.316 "dhchap_dhgroups": [ 00:06:54.316 "null", 00:06:54.316 "ffdhe2048", 00:06:54.316 "ffdhe3072", 00:06:54.316 "ffdhe4096", 00:06:54.316 "ffdhe6144", 00:06:54.316 "ffdhe8192" 00:06:54.316 ] 00:06:54.316 } 00:06:54.316 }, 00:06:54.316 { 00:06:54.316 "method": "bdev_nvme_set_hotplug", 00:06:54.316 "params": { 00:06:54.316 "period_us": 100000, 00:06:54.316 "enable": false 00:06:54.316 } 00:06:54.316 }, 00:06:54.316 { 00:06:54.316 "method": "bdev_iscsi_set_options", 00:06:54.316 "params": { 00:06:54.316 "timeout_sec": 30 00:06:54.316 } 00:06:54.316 }, 00:06:54.316 { 00:06:54.316 "method": "bdev_wait_for_examine" 00:06:54.316 } 00:06:54.316 ] 00:06:54.316 }, 00:06:54.316 { 00:06:54.316 "subsystem": "nvmf", 00:06:54.316 "config": [ 00:06:54.316 { 00:06:54.316 "method": "nvmf_set_config", 00:06:54.316 "params": { 00:06:54.316 "discovery_filter": "match_any", 00:06:54.316 "admin_cmd_passthru": { 00:06:54.316 "identify_ctrlr": false 00:06:54.316 }, 00:06:54.316 "dhchap_digests": [ 00:06:54.316 "sha256", 00:06:54.316 "sha384", 00:06:54.316 "sha512" 00:06:54.316 ], 00:06:54.316 "dhchap_dhgroups": [ 00:06:54.316 "null", 00:06:54.316 "ffdhe2048", 00:06:54.316 "ffdhe3072", 00:06:54.316 "ffdhe4096", 00:06:54.316 "ffdhe6144", 00:06:54.316 "ffdhe8192" 00:06:54.316 ] 00:06:54.316 } 00:06:54.316 }, 00:06:54.316 { 00:06:54.316 "method": "nvmf_set_max_subsystems", 00:06:54.316 "params": { 00:06:54.316 "max_subsystems": 1024 00:06:54.316 } 00:06:54.316 }, 00:06:54.316 { 00:06:54.316 "method": "nvmf_set_crdt", 00:06:54.316 "params": { 00:06:54.316 "crdt1": 0, 00:06:54.316 "crdt2": 0, 00:06:54.316 "crdt3": 0 00:06:54.316 } 00:06:54.316 }, 00:06:54.316 { 00:06:54.316 "method": "nvmf_create_transport", 00:06:54.316 "params": { 00:06:54.316 "trtype": "TCP", 00:06:54.316 "max_queue_depth": 128, 00:06:54.316 "max_io_qpairs_per_ctrlr": 127, 00:06:54.316 "in_capsule_data_size": 4096, 00:06:54.316 "max_io_size": 131072, 00:06:54.316 "io_unit_size": 131072, 00:06:54.316 "max_aq_depth": 128, 00:06:54.316 "num_shared_buffers": 511, 00:06:54.316 "buf_cache_size": 4294967295, 00:06:54.316 "dif_insert_or_strip": false, 00:06:54.316 "zcopy": false, 00:06:54.316 "c2h_success": true, 00:06:54.316 "sock_priority": 0, 00:06:54.316 "abort_timeout_sec": 1, 00:06:54.316 "ack_timeout": 0, 00:06:54.316 "data_wr_pool_size": 0 00:06:54.316 } 00:06:54.316 } 00:06:54.316 ] 00:06:54.316 }, 00:06:54.316 { 00:06:54.316 "subsystem": "nbd", 00:06:54.316 "config": [] 00:06:54.316 }, 00:06:54.316 { 00:06:54.316 "subsystem": "ublk", 00:06:54.316 "config": [] 00:06:54.316 }, 00:06:54.316 { 00:06:54.316 "subsystem": "vhost_blk", 00:06:54.316 "config": [] 00:06:54.316 }, 00:06:54.316 { 00:06:54.316 "subsystem": "scsi", 00:06:54.316 "config": null 00:06:54.316 }, 00:06:54.316 { 00:06:54.316 "subsystem": "iscsi", 00:06:54.316 "config": [ 00:06:54.316 { 00:06:54.316 "method": "iscsi_set_options", 00:06:54.316 "params": { 00:06:54.316 "node_base": "iqn.2016-06.io.spdk", 00:06:54.316 "max_sessions": 128, 00:06:54.316 "max_connections_per_session": 2, 
00:06:54.316 "max_queue_depth": 64, 00:06:54.316 "default_time2wait": 2, 00:06:54.316 "default_time2retain": 20, 00:06:54.316 "first_burst_length": 8192, 00:06:54.316 "immediate_data": true, 00:06:54.316 "allow_duplicated_isid": false, 00:06:54.316 "error_recovery_level": 0, 00:06:54.316 "nop_timeout": 60, 00:06:54.316 "nop_in_interval": 30, 00:06:54.316 "disable_chap": false, 00:06:54.316 "require_chap": false, 00:06:54.316 "mutual_chap": false, 00:06:54.316 "chap_group": 0, 00:06:54.316 "max_large_datain_per_connection": 64, 00:06:54.316 "max_r2t_per_connection": 4, 00:06:54.316 "pdu_pool_size": 36864, 00:06:54.316 "immediate_data_pool_size": 16384, 00:06:54.316 "data_out_pool_size": 2048 00:06:54.316 } 00:06:54.316 } 00:06:54.316 ] 00:06:54.316 }, 00:06:54.316 { 00:06:54.316 "subsystem": "vhost_scsi", 00:06:54.316 "config": [] 00:06:54.316 } 00:06:54.316 ] 00:06:54.316 } 00:06:54.316 10:34:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:54.316 10:34:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2850004 00:06:54.316 10:34:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 2850004 ']' 00:06:54.316 10:34:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 2850004 00:06:54.317 10:34:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:06:54.317 10:34:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:54.317 10:34:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2850004 00:06:54.575 10:34:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:54.575 10:34:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:54.575 10:34:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2850004' 00:06:54.575 killing process with pid 2850004 00:06:54.575 10:34:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 2850004 00:06:54.575 10:34:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 2850004 00:06:54.833 10:34:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2850030 00:06:54.833 10:34:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:54.833 10:34:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:07:00.098 10:34:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2850030 00:07:00.098 10:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 2850030 ']' 00:07:00.098 10:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 2850030 00:07:00.098 10:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:07:00.098 10:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:00.098 10:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2850030 00:07:00.098 10:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:00.098 10:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:00.098 10:34:25 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2850030' 00:07:00.098 killing process with pid 2850030 00:07:00.098 10:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 2850030 00:07:00.098 10:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 2850030 00:07:00.357 10:34:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:07:00.357 10:34:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:07:00.357 00:07:00.357 real 0m6.446s 00:07:00.357 user 0m6.093s 00:07:00.357 sys 0m0.724s 00:07:00.357 10:34:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:00.357 10:34:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:00.357 ************************************ 00:07:00.357 END TEST skip_rpc_with_json 00:07:00.357 ************************************ 00:07:00.357 10:34:26 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:00.357 10:34:26 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:00.357 10:34:26 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:00.357 10:34:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.357 ************************************ 00:07:00.357 START TEST skip_rpc_with_delay 00:07:00.357 ************************************ 00:07:00.357 10:34:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:07:00.357 10:34:26 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:00.357 10:34:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:07:00.357 10:34:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:00.357 10:34:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:00.357 10:34:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.357 10:34:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:00.357 10:34:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.357 10:34:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:00.357 10:34:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.357 10:34:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:00.357 10:34:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:00.357 10:34:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:00.357 [2024-11-05 10:34:26.325231] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:07:00.357 10:34:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:07:00.357 10:34:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:00.357 10:34:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:00.357 10:34:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:00.357 00:07:00.357 real 0m0.042s 00:07:00.357 user 0m0.016s 00:07:00.357 sys 0m0.025s 00:07:00.357 10:34:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:00.357 10:34:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:00.357 ************************************ 00:07:00.357 END TEST skip_rpc_with_delay 00:07:00.357 ************************************ 00:07:00.357 10:34:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:00.357 10:34:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:00.358 10:34:26 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:00.358 10:34:26 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:00.358 10:34:26 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:00.358 10:34:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.358 ************************************ 00:07:00.358 START TEST exit_on_failed_rpc_init 00:07:00.358 ************************************ 00:07:00.358 10:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:07:00.358 10:34:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2850940 00:07:00.358 10:34:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2850940 00:07:00.358 10:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 2850940 ']' 00:07:00.358 10:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.358 10:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:00.358 10:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.358 10:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:00.358 10:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:00.358 10:34:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:00.617 [2024-11-05 10:34:26.436587] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
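For reference, the skip_rpc_with_delay check that ended just above only asserts that spdk_tgt refuses '--wait-for-rpc' when the RPC server is disabled. A minimal manual reproduction under the same build tree would look roughly like the sketch below; SPDK_BIN is just shorthand for the binary path used in the trace, not a variable the test defines.

  SPDK_BIN=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin
  # Combining --no-rpc-server with --wait-for-rpc is expected to fail fast.
  if ! "$SPDK_BIN/spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "expected failure: --wait-for-rpc needs the RPC server"
  fi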
00:07:00.617 [2024-11-05 10:34:26.436646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2850940 ] 00:07:00.617 [2024-11-05 10:34:26.558150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.617 [2024-11-05 10:34:26.615498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.876 10:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:00.876 10:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:07:00.876 10:34:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:00.876 10:34:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:00.876 10:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:07:00.876 10:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:00.876 10:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:00.876 10:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.876 10:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:00.876 10:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.876 10:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:00.876 10:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.876 10:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:00.876 10:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:00.876 10:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:00.876 [2024-11-05 10:34:26.882086] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:07:00.876 [2024-11-05 10:34:26.882145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2850946 ] 00:07:01.135 [2024-11-05 10:34:26.961334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.135 [2024-11-05 10:34:27.007579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.135 [2024-11-05 10:34:27.007668] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
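The 'RPC Unix domain socket path /var/tmp/spdk.sock in use' error above, together with the follow-up messages on the next lines, is the point of exit_on_failed_rpc_init: a second spdk_tgt bound to the default socket cannot start its RPC service and must exit non-zero. A rough hand-driven sketch of that collision, with core masks and binary path taken from the trace and a plain sleep standing in for the test's waitforlisten helper:

  SPDK_BIN=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin
  "$SPDK_BIN/spdk_tgt" -m 0x1 &        # first target owns /var/tmp/spdk.sock
  first=$!
  sleep 2                              # crude wait; the test polls with waitforlisten
  "$SPDK_BIN/spdk_tgt" -m 0x2 || echo "second target exited non-zero as expected"
  kill "$first"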
00:07:01.135 [2024-11-05 10:34:27.007680] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:01.135 [2024-11-05 10:34:27.007689] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.135 10:34:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:07:01.135 10:34:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.135 10:34:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:07:01.135 10:34:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:07:01.135 10:34:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:07:01.135 10:34:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.135 10:34:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:01.135 10:34:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2850940 00:07:01.135 10:34:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 2850940 ']' 00:07:01.135 10:34:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 2850940 00:07:01.135 10:34:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:07:01.135 10:34:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:01.135 10:34:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2850940 00:07:01.135 10:34:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:01.135 10:34:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:01.135 10:34:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2850940' 00:07:01.135 killing process with pid 2850940 00:07:01.135 10:34:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 2850940 00:07:01.135 10:34:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 2850940 00:07:01.393 00:07:01.393 real 0m1.060s 00:07:01.393 user 0m1.080s 00:07:01.393 sys 0m0.477s 00:07:01.393 10:34:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.393 10:34:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:01.393 ************************************ 00:07:01.393 END TEST exit_on_failed_rpc_init 00:07:01.393 ************************************ 00:07:01.652 10:34:27 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:07:01.652 00:07:01.652 real 0m13.467s 00:07:01.652 user 0m12.529s 00:07:01.652 sys 0m1.879s 00:07:01.652 10:34:27 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.652 10:34:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.652 ************************************ 00:07:01.652 END TEST skip_rpc 00:07:01.652 ************************************ 00:07:01.652 10:34:27 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:01.652 10:34:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:01.652 10:34:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:01.652 10:34:27 
-- common/autotest_common.sh@10 -- # set +x 00:07:01.652 ************************************ 00:07:01.652 START TEST rpc_client 00:07:01.652 ************************************ 00:07:01.652 10:34:27 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:01.652 * Looking for test storage... 00:07:01.652 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:07:01.652 10:34:27 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:01.652 10:34:27 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:07:01.652 10:34:27 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:01.911 10:34:27 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.911 10:34:27 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:01.911 10:34:27 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.911 10:34:27 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:01.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.911 --rc genhtml_branch_coverage=1 00:07:01.911 --rc genhtml_function_coverage=1 00:07:01.911 --rc genhtml_legend=1 00:07:01.911 --rc geninfo_all_blocks=1 00:07:01.911 --rc geninfo_unexecuted_blocks=1 00:07:01.911 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:01.911 ' 00:07:01.911 10:34:27 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:01.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.911 --rc genhtml_branch_coverage=1 00:07:01.911 --rc genhtml_function_coverage=1 00:07:01.911 --rc genhtml_legend=1 00:07:01.911 --rc geninfo_all_blocks=1 00:07:01.911 --rc geninfo_unexecuted_blocks=1 00:07:01.911 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:01.911 ' 00:07:01.911 10:34:27 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:01.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.911 --rc genhtml_branch_coverage=1 00:07:01.911 --rc genhtml_function_coverage=1 00:07:01.911 --rc genhtml_legend=1 00:07:01.911 --rc geninfo_all_blocks=1 00:07:01.911 --rc geninfo_unexecuted_blocks=1 00:07:01.911 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:01.912 ' 00:07:01.912 10:34:27 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:01.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.912 --rc genhtml_branch_coverage=1 00:07:01.912 --rc genhtml_function_coverage=1 00:07:01.912 --rc genhtml_legend=1 00:07:01.912 --rc geninfo_all_blocks=1 00:07:01.912 --rc geninfo_unexecuted_blocks=1 00:07:01.912 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:01.912 ' 00:07:01.912 10:34:27 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:01.912 OK 00:07:01.912 10:34:27 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:01.912 00:07:01.912 real 0m0.232s 00:07:01.912 user 0m0.121s 00:07:01.912 sys 0m0.126s 00:07:01.912 10:34:27 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 
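The START TEST / END TEST banners and the real/user/sys timing lines that wrap each suite come from the run_test helper in autotest_common.sh; the following is a simplified stand-in showing the shape of that wrapper, not the actual helper:

  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                        # run the suite script with its arguments
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }
  # usage, mirroring the trace: run_test_sketch rpc_client .../test/rpc_client/rpc_client.sh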
00:07:01.912 10:34:27 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:01.912 ************************************ 00:07:01.912 END TEST rpc_client 00:07:01.912 ************************************ 00:07:01.912 10:34:27 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:07:01.912 10:34:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:01.912 10:34:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:01.912 10:34:27 -- common/autotest_common.sh@10 -- # set +x 00:07:01.912 ************************************ 00:07:01.912 START TEST json_config 00:07:01.912 ************************************ 00:07:01.912 10:34:27 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:07:01.912 10:34:27 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:01.912 10:34:27 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:07:01.912 10:34:27 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:02.171 10:34:28 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:02.171 10:34:28 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.171 10:34:28 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.171 10:34:28 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.171 10:34:28 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.171 10:34:28 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.171 10:34:28 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.171 10:34:28 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.171 10:34:28 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.171 10:34:28 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.171 10:34:28 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.171 10:34:28 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.171 10:34:28 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:02.171 10:34:28 json_config -- scripts/common.sh@345 -- # : 1 00:07:02.171 10:34:28 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.171 10:34:28 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:02.171 10:34:28 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:02.171 10:34:28 json_config -- scripts/common.sh@353 -- # local d=1 00:07:02.171 10:34:28 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.171 10:34:28 json_config -- scripts/common.sh@355 -- # echo 1 00:07:02.171 10:34:28 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.171 10:34:28 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:02.171 10:34:28 json_config -- scripts/common.sh@353 -- # local d=2 00:07:02.171 10:34:28 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.171 10:34:28 json_config -- scripts/common.sh@355 -- # echo 2 00:07:02.171 10:34:28 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.171 10:34:28 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.171 10:34:28 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.171 10:34:28 json_config -- scripts/common.sh@368 -- # return 0 00:07:02.171 10:34:28 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.171 10:34:28 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:02.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.171 --rc genhtml_branch_coverage=1 00:07:02.171 --rc genhtml_function_coverage=1 00:07:02.171 --rc genhtml_legend=1 00:07:02.171 --rc geninfo_all_blocks=1 00:07:02.172 --rc geninfo_unexecuted_blocks=1 00:07:02.172 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:02.172 ' 00:07:02.172 10:34:28 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:02.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.172 --rc genhtml_branch_coverage=1 00:07:02.172 --rc genhtml_function_coverage=1 00:07:02.172 --rc genhtml_legend=1 00:07:02.172 --rc geninfo_all_blocks=1 00:07:02.172 --rc geninfo_unexecuted_blocks=1 00:07:02.172 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:02.172 ' 00:07:02.172 10:34:28 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:02.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.172 --rc genhtml_branch_coverage=1 00:07:02.172 --rc genhtml_function_coverage=1 00:07:02.172 --rc genhtml_legend=1 00:07:02.172 --rc geninfo_all_blocks=1 00:07:02.172 --rc geninfo_unexecuted_blocks=1 00:07:02.172 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:02.172 ' 00:07:02.172 10:34:28 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:02.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.172 --rc genhtml_branch_coverage=1 00:07:02.172 --rc genhtml_function_coverage=1 00:07:02.172 --rc genhtml_legend=1 00:07:02.172 --rc geninfo_all_blocks=1 00:07:02.172 --rc geninfo_unexecuted_blocks=1 00:07:02.172 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:02.172 ' 00:07:02.172 10:34:28 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:02.172 10:34:28 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:02.172 10:34:28 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.172 10:34:28 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.172 10:34:28 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.172 10:34:28 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.172 10:34:28 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.172 10:34:28 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.172 10:34:28 json_config -- paths/export.sh@5 -- # export PATH 00:07:02.172 10:34:28 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@51 -- # : 0 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:02.172 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:02.172 10:34:28 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:02.172 10:34:28 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:07:02.172 10:34:28 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:02.172 10:34:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:02.172 10:34:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:02.172 10:34:28 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:02.172 10:34:28 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:07:02.172 WARNING: No tests are enabled so not running JSON configuration tests 00:07:02.172 10:34:28 json_config -- json_config/json_config.sh@28 -- # exit 0 00:07:02.172 00:07:02.172 real 0m0.188s 00:07:02.172 user 0m0.115s 00:07:02.172 sys 0m0.079s 00:07:02.172 10:34:28 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:02.172 10:34:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.172 ************************************ 00:07:02.172 END TEST json_config 00:07:02.172 ************************************ 00:07:02.172 10:34:28 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:02.172 10:34:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:02.172 10:34:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:02.172 10:34:28 -- common/autotest_common.sh@10 -- # set +x 00:07:02.172 ************************************ 00:07:02.172 START TEST json_config_extra_key 00:07:02.172 ************************************ 00:07:02.172 10:34:28 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:02.432 10:34:28 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:02.432 10:34:28 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov 
--version 00:07:02.432 10:34:28 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:02.432 10:34:28 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:02.432 10:34:28 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.432 10:34:28 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:02.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.432 --rc genhtml_branch_coverage=1 00:07:02.432 --rc genhtml_function_coverage=1 00:07:02.432 --rc genhtml_legend=1 00:07:02.432 --rc geninfo_all_blocks=1 00:07:02.432 --rc geninfo_unexecuted_blocks=1 00:07:02.432 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:02.432 ' 00:07:02.432 10:34:28 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:02.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.432 --rc genhtml_branch_coverage=1 
00:07:02.432 --rc genhtml_function_coverage=1 00:07:02.432 --rc genhtml_legend=1 00:07:02.432 --rc geninfo_all_blocks=1 00:07:02.432 --rc geninfo_unexecuted_blocks=1 00:07:02.432 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:02.432 ' 00:07:02.432 10:34:28 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:02.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.432 --rc genhtml_branch_coverage=1 00:07:02.432 --rc genhtml_function_coverage=1 00:07:02.432 --rc genhtml_legend=1 00:07:02.432 --rc geninfo_all_blocks=1 00:07:02.432 --rc geninfo_unexecuted_blocks=1 00:07:02.432 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:02.432 ' 00:07:02.432 10:34:28 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:02.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.432 --rc genhtml_branch_coverage=1 00:07:02.432 --rc genhtml_function_coverage=1 00:07:02.432 --rc genhtml_legend=1 00:07:02.432 --rc geninfo_all_blocks=1 00:07:02.432 --rc geninfo_unexecuted_blocks=1 00:07:02.432 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:02.432 ' 00:07:02.432 10:34:28 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:07:02.432 10:34:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:02.432 10:34:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.432 10:34:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.432 10:34:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.432 10:34:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.432 10:34:28 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.432 10:34:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.432 10:34:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.432 10:34:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.432 10:34:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.432 10:34:28 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.432 10:34:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:07:02.432 10:34:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:07:02.432 10:34:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.432 10:34:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.432 10:34:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:02.432 10:34:28 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.432 10:34:28 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.432 10:34:28 json_config_extra_key -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.432 10:34:28 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.433 10:34:28 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.433 10:34:28 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.433 10:34:28 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.433 10:34:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:02.433 10:34:28 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.433 10:34:28 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:02.433 10:34:28 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:02.433 10:34:28 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:02.433 10:34:28 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.433 10:34:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.433 10:34:28 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.433 10:34:28 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:02.433 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:02.433 10:34:28 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:02.433 10:34:28 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:02.433 10:34:28 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:02.433 10:34:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:07:02.433 10:34:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:02.433 10:34:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # 
declare -A app_pid 00:07:02.433 10:34:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:02.433 10:34:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:02.433 10:34:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:02.433 10:34:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:02.433 10:34:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:02.433 10:34:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:02.433 10:34:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:02.433 10:34:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:02.433 INFO: launching applications... 00:07:02.433 10:34:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:07:02.433 10:34:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:02.433 10:34:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:02.433 10:34:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:02.433 10:34:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:02.433 10:34:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:02.433 10:34:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:02.433 10:34:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:02.433 10:34:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2851297 00:07:02.433 10:34:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:02.433 Waiting for target to run... 00:07:02.433 10:34:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2851297 /var/tmp/spdk_tgt.sock 00:07:02.433 10:34:28 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 2851297 ']' 00:07:02.433 10:34:28 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:02.433 10:34:28 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:07:02.433 10:34:28 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:02.433 10:34:28 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:02.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
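The launch traced above is the pattern json_config_extra_key exercises: spdk_tgt is started non-interactively from a JSON config on a private RPC socket, probed over that socket, and later shut down with SIGINT. A minimal hand-driven equivalent, assuming the same extra_key.json shipped with the test and using rpc_get_methods purely as a liveness ping; this is a sketch, not the test's own start helper:

  SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock \
      --json "$SPDK_DIR/test/json_config/extra_key.json" &
  tgt=$!
  sleep 2                                            # the test waits on the socket instead
  "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null
  kill -SIGINT "$tgt"                                # mirrors json_config_test_shutdown_app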
00:07:02.433 10:34:28 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:02.433 10:34:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:02.433 [2024-11-05 10:34:28.416611] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:07:02.433 [2024-11-05 10:34:28.416682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2851297 ] 00:07:03.001 [2024-11-05 10:34:28.938912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.001 [2024-11-05 10:34:28.992907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.570 10:34:29 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:03.570 10:34:29 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:07:03.570 10:34:29 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:03.570 00:07:03.570 10:34:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:03.570 INFO: shutting down applications... 00:07:03.570 10:34:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:03.570 10:34:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:03.570 10:34:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:03.570 10:34:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2851297 ]] 00:07:03.570 10:34:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2851297 00:07:03.570 10:34:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:03.570 10:34:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:03.570 10:34:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2851297 00:07:03.570 10:34:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:04.138 10:34:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:04.138 10:34:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:04.138 10:34:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2851297 00:07:04.138 10:34:29 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:04.138 10:34:29 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:04.138 10:34:29 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:04.138 10:34:29 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:04.138 SPDK target shutdown done 00:07:04.138 10:34:29 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:04.138 Success 00:07:04.138 00:07:04.138 real 0m1.766s 00:07:04.138 user 0m1.485s 00:07:04.138 sys 0m0.671s 00:07:04.138 10:34:29 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:04.138 10:34:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:04.138 ************************************ 00:07:04.138 END TEST json_config_extra_key 00:07:04.138 ************************************ 00:07:04.138 10:34:29 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 
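The alias_rpc suite starting here feeds configuration into an already running target over RPC instead of at startup; further down in its trace it calls scripts/rpc.py load_config -i against the default socket. A rough hand-driven equivalent is sketched below; the empty subsystem list piped in is purely illustrative and not the test's real input:

  SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  # Load a (deliberately empty) JSON config into the running target over /var/tmp/spdk.sock.
  echo '{"subsystems": []}' | "$SPDK_DIR/scripts/rpc.py" load_config -i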
00:07:04.138 10:34:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:04.138 10:34:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:04.138 10:34:29 -- common/autotest_common.sh@10 -- # set +x 00:07:04.138 ************************************ 00:07:04.138 START TEST alias_rpc 00:07:04.138 ************************************ 00:07:04.138 10:34:30 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:04.138 * Looking for test storage... 00:07:04.138 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:07:04.139 10:34:30 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:04.139 10:34:30 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:07:04.139 10:34:30 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:04.139 10:34:30 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.139 10:34:30 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:04.139 10:34:30 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.139 10:34:30 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:04.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.139 --rc genhtml_branch_coverage=1 00:07:04.139 --rc genhtml_function_coverage=1 00:07:04.139 --rc genhtml_legend=1 00:07:04.139 --rc geninfo_all_blocks=1 00:07:04.139 --rc geninfo_unexecuted_blocks=1 00:07:04.139 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:04.139 ' 00:07:04.398 10:34:30 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:04.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.398 --rc genhtml_branch_coverage=1 00:07:04.398 --rc genhtml_function_coverage=1 00:07:04.398 --rc genhtml_legend=1 00:07:04.398 --rc geninfo_all_blocks=1 00:07:04.398 --rc geninfo_unexecuted_blocks=1 00:07:04.398 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:04.398 ' 00:07:04.398 10:34:30 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:04.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.398 --rc genhtml_branch_coverage=1 00:07:04.398 --rc genhtml_function_coverage=1 00:07:04.398 --rc genhtml_legend=1 00:07:04.398 --rc geninfo_all_blocks=1 00:07:04.398 --rc geninfo_unexecuted_blocks=1 00:07:04.398 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:04.398 ' 00:07:04.398 10:34:30 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:04.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.398 --rc genhtml_branch_coverage=1 00:07:04.398 --rc genhtml_function_coverage=1 00:07:04.398 --rc genhtml_legend=1 00:07:04.398 --rc geninfo_all_blocks=1 00:07:04.398 --rc geninfo_unexecuted_blocks=1 00:07:04.398 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:04.398 ' 00:07:04.398 10:34:30 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:04.398 10:34:30 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2851636 00:07:04.398 10:34:30 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2851636 00:07:04.398 10:34:30 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:04.398 10:34:30 alias_rpc -- 
common/autotest_common.sh@833 -- # '[' -z 2851636 ']' 00:07:04.398 10:34:30 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.398 10:34:30 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:04.398 10:34:30 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.398 10:34:30 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:04.398 10:34:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.398 [2024-11-05 10:34:30.243524] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:07:04.398 [2024-11-05 10:34:30.243593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2851636 ] 00:07:04.398 [2024-11-05 10:34:30.355933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.398 [2024-11-05 10:34:30.412816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.657 10:34:30 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:04.657 10:34:30 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:04.657 10:34:30 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:04.915 10:34:30 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2851636 00:07:04.915 10:34:30 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 2851636 ']' 00:07:04.915 10:34:30 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 2851636 00:07:04.915 10:34:30 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:07:04.915 10:34:30 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:04.915 10:34:30 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2851636 00:07:05.174 10:34:31 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:05.174 10:34:31 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:05.174 10:34:31 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2851636' 00:07:05.174 killing process with pid 2851636 00:07:05.174 10:34:31 alias_rpc -- common/autotest_common.sh@971 -- # kill 2851636 00:07:05.174 10:34:31 alias_rpc -- common/autotest_common.sh@976 -- # wait 2851636 00:07:05.433 00:07:05.433 real 0m1.370s 00:07:05.433 user 0m1.457s 00:07:05.433 sys 0m0.516s 00:07:05.433 10:34:31 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:05.433 10:34:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.433 ************************************ 00:07:05.433 END TEST alias_rpc 00:07:05.433 ************************************ 00:07:05.433 10:34:31 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:05.433 10:34:31 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:05.433 10:34:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:05.433 10:34:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:05.433 10:34:31 -- common/autotest_common.sh@10 -- # set +x 00:07:05.433 ************************************ 00:07:05.433 START TEST 
spdkcli_tcp 00:07:05.433 ************************************ 00:07:05.433 10:34:31 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:05.693 * Looking for test storage... 00:07:05.693 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:07:05.693 10:34:31 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:05.693 10:34:31 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:07:05.693 10:34:31 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:05.693 10:34:31 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.693 10:34:31 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:05.693 10:34:31 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.693 10:34:31 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:05.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.693 --rc genhtml_branch_coverage=1 00:07:05.693 --rc genhtml_function_coverage=1 00:07:05.693 --rc genhtml_legend=1 00:07:05.693 --rc geninfo_all_blocks=1 00:07:05.693 --rc geninfo_unexecuted_blocks=1 00:07:05.693 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:05.693 ' 00:07:05.693 10:34:31 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:05.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.693 --rc genhtml_branch_coverage=1 00:07:05.693 --rc genhtml_function_coverage=1 00:07:05.693 --rc genhtml_legend=1 00:07:05.693 --rc geninfo_all_blocks=1 00:07:05.693 --rc geninfo_unexecuted_blocks=1 00:07:05.693 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:05.693 ' 00:07:05.693 10:34:31 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:05.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.693 --rc genhtml_branch_coverage=1 00:07:05.693 --rc genhtml_function_coverage=1 00:07:05.693 --rc genhtml_legend=1 00:07:05.693 --rc geninfo_all_blocks=1 00:07:05.693 --rc geninfo_unexecuted_blocks=1 00:07:05.693 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:05.693 ' 00:07:05.693 10:34:31 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:05.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.693 --rc genhtml_branch_coverage=1 00:07:05.693 --rc genhtml_function_coverage=1 00:07:05.693 --rc genhtml_legend=1 00:07:05.693 --rc geninfo_all_blocks=1 00:07:05.693 --rc geninfo_unexecuted_blocks=1 00:07:05.693 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:05.693 ' 00:07:05.693 10:34:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:07:05.693 10:34:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:05.693 10:34:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # 
spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:07:05.693 10:34:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:05.693 10:34:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:05.693 10:34:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:05.693 10:34:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:05.693 10:34:31 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:05.693 10:34:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.693 10:34:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:05.693 10:34:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2851932 00:07:05.693 10:34:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2851932 00:07:05.693 10:34:31 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 2851932 ']' 00:07:05.693 10:34:31 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.693 10:34:31 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:05.693 10:34:31 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.693 10:34:31 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:05.693 10:34:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.693 [2024-11-05 10:34:31.690627] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:07:05.693 [2024-11-05 10:34:31.690683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2851932 ] 00:07:05.952 [2024-11-05 10:34:31.797063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.952 [2024-11-05 10:34:31.852341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.952 [2024-11-05 10:34:31.852357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.212 10:34:32 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:06.212 10:34:32 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:07:06.212 10:34:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2851942 00:07:06.212 10:34:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:06.212 10:34:32 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:06.212 [ 00:07:06.212 "spdk_get_version", 00:07:06.212 "rpc_get_methods", 00:07:06.212 "notify_get_notifications", 00:07:06.212 "notify_get_types", 00:07:06.212 "trace_get_info", 00:07:06.212 "trace_get_tpoint_group_mask", 00:07:06.212 "trace_disable_tpoint_group", 00:07:06.212 "trace_enable_tpoint_group", 00:07:06.212 "trace_clear_tpoint_mask", 00:07:06.212 "trace_set_tpoint_mask", 00:07:06.212 "fsdev_set_opts", 00:07:06.212 "fsdev_get_opts", 00:07:06.212 "framework_get_pci_devices", 00:07:06.212 "framework_get_config", 00:07:06.212 "framework_get_subsystems", 00:07:06.212 "vfu_tgt_set_base_path", 00:07:06.212 
"keyring_get_keys", 00:07:06.212 "iobuf_get_stats", 00:07:06.212 "iobuf_set_options", 00:07:06.212 "sock_get_default_impl", 00:07:06.212 "sock_set_default_impl", 00:07:06.212 "sock_impl_set_options", 00:07:06.212 "sock_impl_get_options", 00:07:06.212 "vmd_rescan", 00:07:06.212 "vmd_remove_device", 00:07:06.212 "vmd_enable", 00:07:06.212 "accel_get_stats", 00:07:06.212 "accel_set_options", 00:07:06.212 "accel_set_driver", 00:07:06.212 "accel_crypto_key_destroy", 00:07:06.212 "accel_crypto_keys_get", 00:07:06.212 "accel_crypto_key_create", 00:07:06.212 "accel_assign_opc", 00:07:06.212 "accel_get_module_info", 00:07:06.212 "accel_get_opc_assignments", 00:07:06.212 "bdev_get_histogram", 00:07:06.212 "bdev_enable_histogram", 00:07:06.212 "bdev_set_qos_limit", 00:07:06.212 "bdev_set_qd_sampling_period", 00:07:06.212 "bdev_get_bdevs", 00:07:06.212 "bdev_reset_iostat", 00:07:06.212 "bdev_get_iostat", 00:07:06.212 "bdev_examine", 00:07:06.212 "bdev_wait_for_examine", 00:07:06.212 "bdev_set_options", 00:07:06.212 "scsi_get_devices", 00:07:06.212 "thread_set_cpumask", 00:07:06.212 "scheduler_set_options", 00:07:06.212 "framework_get_governor", 00:07:06.212 "framework_get_scheduler", 00:07:06.212 "framework_set_scheduler", 00:07:06.212 "framework_get_reactors", 00:07:06.212 "thread_get_io_channels", 00:07:06.212 "thread_get_pollers", 00:07:06.212 "thread_get_stats", 00:07:06.212 "framework_monitor_context_switch", 00:07:06.212 "spdk_kill_instance", 00:07:06.212 "log_enable_timestamps", 00:07:06.212 "log_get_flags", 00:07:06.212 "log_clear_flag", 00:07:06.212 "log_set_flag", 00:07:06.212 "log_get_level", 00:07:06.212 "log_set_level", 00:07:06.212 "log_get_print_level", 00:07:06.212 "log_set_print_level", 00:07:06.212 "framework_enable_cpumask_locks", 00:07:06.212 "framework_disable_cpumask_locks", 00:07:06.212 "framework_wait_init", 00:07:06.212 "framework_start_init", 00:07:06.212 "virtio_blk_create_transport", 00:07:06.212 "virtio_blk_get_transports", 00:07:06.212 "vhost_controller_set_coalescing", 00:07:06.212 "vhost_get_controllers", 00:07:06.212 "vhost_delete_controller", 00:07:06.212 "vhost_create_blk_controller", 00:07:06.212 "vhost_scsi_controller_remove_target", 00:07:06.212 "vhost_scsi_controller_add_target", 00:07:06.212 "vhost_start_scsi_controller", 00:07:06.212 "vhost_create_scsi_controller", 00:07:06.212 "ublk_recover_disk", 00:07:06.212 "ublk_get_disks", 00:07:06.212 "ublk_stop_disk", 00:07:06.212 "ublk_start_disk", 00:07:06.212 "ublk_destroy_target", 00:07:06.212 "ublk_create_target", 00:07:06.212 "nbd_get_disks", 00:07:06.212 "nbd_stop_disk", 00:07:06.212 "nbd_start_disk", 00:07:06.213 "env_dpdk_get_mem_stats", 00:07:06.213 "nvmf_stop_mdns_prr", 00:07:06.213 "nvmf_publish_mdns_prr", 00:07:06.213 "nvmf_subsystem_get_listeners", 00:07:06.213 "nvmf_subsystem_get_qpairs", 00:07:06.213 "nvmf_subsystem_get_controllers", 00:07:06.213 "nvmf_get_stats", 00:07:06.213 "nvmf_get_transports", 00:07:06.213 "nvmf_create_transport", 00:07:06.213 "nvmf_get_targets", 00:07:06.213 "nvmf_delete_target", 00:07:06.213 "nvmf_create_target", 00:07:06.213 "nvmf_subsystem_allow_any_host", 00:07:06.213 "nvmf_subsystem_set_keys", 00:07:06.213 "nvmf_subsystem_remove_host", 00:07:06.213 "nvmf_subsystem_add_host", 00:07:06.213 "nvmf_ns_remove_host", 00:07:06.213 "nvmf_ns_add_host", 00:07:06.213 "nvmf_subsystem_remove_ns", 00:07:06.213 "nvmf_subsystem_set_ns_ana_group", 00:07:06.213 "nvmf_subsystem_add_ns", 00:07:06.213 "nvmf_subsystem_listener_set_ana_state", 00:07:06.213 "nvmf_discovery_get_referrals", 
00:07:06.213 "nvmf_discovery_remove_referral", 00:07:06.213 "nvmf_discovery_add_referral", 00:07:06.213 "nvmf_subsystem_remove_listener", 00:07:06.213 "nvmf_subsystem_add_listener", 00:07:06.213 "nvmf_delete_subsystem", 00:07:06.213 "nvmf_create_subsystem", 00:07:06.213 "nvmf_get_subsystems", 00:07:06.213 "nvmf_set_crdt", 00:07:06.213 "nvmf_set_config", 00:07:06.213 "nvmf_set_max_subsystems", 00:07:06.213 "iscsi_get_histogram", 00:07:06.213 "iscsi_enable_histogram", 00:07:06.213 "iscsi_set_options", 00:07:06.213 "iscsi_get_auth_groups", 00:07:06.213 "iscsi_auth_group_remove_secret", 00:07:06.213 "iscsi_auth_group_add_secret", 00:07:06.213 "iscsi_delete_auth_group", 00:07:06.213 "iscsi_create_auth_group", 00:07:06.213 "iscsi_set_discovery_auth", 00:07:06.213 "iscsi_get_options", 00:07:06.213 "iscsi_target_node_request_logout", 00:07:06.213 "iscsi_target_node_set_redirect", 00:07:06.213 "iscsi_target_node_set_auth", 00:07:06.213 "iscsi_target_node_add_lun", 00:07:06.213 "iscsi_get_stats", 00:07:06.213 "iscsi_get_connections", 00:07:06.213 "iscsi_portal_group_set_auth", 00:07:06.213 "iscsi_start_portal_group", 00:07:06.213 "iscsi_delete_portal_group", 00:07:06.213 "iscsi_create_portal_group", 00:07:06.213 "iscsi_get_portal_groups", 00:07:06.213 "iscsi_delete_target_node", 00:07:06.213 "iscsi_target_node_remove_pg_ig_maps", 00:07:06.213 "iscsi_target_node_add_pg_ig_maps", 00:07:06.213 "iscsi_create_target_node", 00:07:06.213 "iscsi_get_target_nodes", 00:07:06.213 "iscsi_delete_initiator_group", 00:07:06.213 "iscsi_initiator_group_remove_initiators", 00:07:06.213 "iscsi_initiator_group_add_initiators", 00:07:06.213 "iscsi_create_initiator_group", 00:07:06.213 "iscsi_get_initiator_groups", 00:07:06.213 "fsdev_aio_delete", 00:07:06.213 "fsdev_aio_create", 00:07:06.213 "keyring_linux_set_options", 00:07:06.213 "keyring_file_remove_key", 00:07:06.213 "keyring_file_add_key", 00:07:06.213 "vfu_virtio_create_fs_endpoint", 00:07:06.213 "vfu_virtio_create_scsi_endpoint", 00:07:06.213 "vfu_virtio_scsi_remove_target", 00:07:06.213 "vfu_virtio_scsi_add_target", 00:07:06.213 "vfu_virtio_create_blk_endpoint", 00:07:06.213 "vfu_virtio_delete_endpoint", 00:07:06.213 "iaa_scan_accel_module", 00:07:06.213 "dsa_scan_accel_module", 00:07:06.213 "ioat_scan_accel_module", 00:07:06.213 "accel_error_inject_error", 00:07:06.213 "bdev_iscsi_delete", 00:07:06.213 "bdev_iscsi_create", 00:07:06.213 "bdev_iscsi_set_options", 00:07:06.213 "bdev_virtio_attach_controller", 00:07:06.213 "bdev_virtio_scsi_get_devices", 00:07:06.213 "bdev_virtio_detach_controller", 00:07:06.213 "bdev_virtio_blk_set_hotplug", 00:07:06.213 "bdev_ftl_set_property", 00:07:06.213 "bdev_ftl_get_properties", 00:07:06.213 "bdev_ftl_get_stats", 00:07:06.213 "bdev_ftl_unmap", 00:07:06.213 "bdev_ftl_unload", 00:07:06.213 "bdev_ftl_delete", 00:07:06.213 "bdev_ftl_load", 00:07:06.213 "bdev_ftl_create", 00:07:06.213 "bdev_aio_delete", 00:07:06.213 "bdev_aio_rescan", 00:07:06.213 "bdev_aio_create", 00:07:06.213 "blobfs_create", 00:07:06.213 "blobfs_detect", 00:07:06.213 "blobfs_set_cache_size", 00:07:06.213 "bdev_zone_block_delete", 00:07:06.213 "bdev_zone_block_create", 00:07:06.213 "bdev_delay_delete", 00:07:06.213 "bdev_delay_create", 00:07:06.213 "bdev_delay_update_latency", 00:07:06.213 "bdev_split_delete", 00:07:06.213 "bdev_split_create", 00:07:06.213 "bdev_error_inject_error", 00:07:06.213 "bdev_error_delete", 00:07:06.213 "bdev_error_create", 00:07:06.213 "bdev_raid_set_options", 00:07:06.213 "bdev_raid_remove_base_bdev", 00:07:06.213 
"bdev_raid_add_base_bdev", 00:07:06.213 "bdev_raid_delete", 00:07:06.213 "bdev_raid_create", 00:07:06.213 "bdev_raid_get_bdevs", 00:07:06.213 "bdev_lvol_set_parent_bdev", 00:07:06.213 "bdev_lvol_set_parent", 00:07:06.213 "bdev_lvol_check_shallow_copy", 00:07:06.213 "bdev_lvol_start_shallow_copy", 00:07:06.213 "bdev_lvol_grow_lvstore", 00:07:06.213 "bdev_lvol_get_lvols", 00:07:06.213 "bdev_lvol_get_lvstores", 00:07:06.213 "bdev_lvol_delete", 00:07:06.213 "bdev_lvol_set_read_only", 00:07:06.213 "bdev_lvol_resize", 00:07:06.213 "bdev_lvol_decouple_parent", 00:07:06.213 "bdev_lvol_inflate", 00:07:06.213 "bdev_lvol_rename", 00:07:06.213 "bdev_lvol_clone_bdev", 00:07:06.213 "bdev_lvol_clone", 00:07:06.213 "bdev_lvol_snapshot", 00:07:06.213 "bdev_lvol_create", 00:07:06.213 "bdev_lvol_delete_lvstore", 00:07:06.213 "bdev_lvol_rename_lvstore", 00:07:06.213 "bdev_lvol_create_lvstore", 00:07:06.213 "bdev_passthru_delete", 00:07:06.213 "bdev_passthru_create", 00:07:06.213 "bdev_nvme_cuse_unregister", 00:07:06.213 "bdev_nvme_cuse_register", 00:07:06.213 "bdev_opal_new_user", 00:07:06.213 "bdev_opal_set_lock_state", 00:07:06.213 "bdev_opal_delete", 00:07:06.213 "bdev_opal_get_info", 00:07:06.213 "bdev_opal_create", 00:07:06.213 "bdev_nvme_opal_revert", 00:07:06.213 "bdev_nvme_opal_init", 00:07:06.213 "bdev_nvme_send_cmd", 00:07:06.213 "bdev_nvme_set_keys", 00:07:06.213 "bdev_nvme_get_path_iostat", 00:07:06.213 "bdev_nvme_get_mdns_discovery_info", 00:07:06.213 "bdev_nvme_stop_mdns_discovery", 00:07:06.213 "bdev_nvme_start_mdns_discovery", 00:07:06.213 "bdev_nvme_set_multipath_policy", 00:07:06.213 "bdev_nvme_set_preferred_path", 00:07:06.213 "bdev_nvme_get_io_paths", 00:07:06.213 "bdev_nvme_remove_error_injection", 00:07:06.213 "bdev_nvme_add_error_injection", 00:07:06.213 "bdev_nvme_get_discovery_info", 00:07:06.213 "bdev_nvme_stop_discovery", 00:07:06.213 "bdev_nvme_start_discovery", 00:07:06.213 "bdev_nvme_get_controller_health_info", 00:07:06.213 "bdev_nvme_disable_controller", 00:07:06.213 "bdev_nvme_enable_controller", 00:07:06.213 "bdev_nvme_reset_controller", 00:07:06.213 "bdev_nvme_get_transport_statistics", 00:07:06.213 "bdev_nvme_apply_firmware", 00:07:06.213 "bdev_nvme_detach_controller", 00:07:06.213 "bdev_nvme_get_controllers", 00:07:06.213 "bdev_nvme_attach_controller", 00:07:06.213 "bdev_nvme_set_hotplug", 00:07:06.213 "bdev_nvme_set_options", 00:07:06.213 "bdev_null_resize", 00:07:06.213 "bdev_null_delete", 00:07:06.213 "bdev_null_create", 00:07:06.213 "bdev_malloc_delete", 00:07:06.213 "bdev_malloc_create" 00:07:06.213 ] 00:07:06.473 10:34:32 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:06.473 10:34:32 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:06.473 10:34:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:06.473 10:34:32 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:06.473 10:34:32 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2851932 00:07:06.473 10:34:32 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 2851932 ']' 00:07:06.473 10:34:32 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 2851932 00:07:06.473 10:34:32 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:07:06.473 10:34:32 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:06.473 10:34:32 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2851932 00:07:06.473 10:34:32 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:06.473 
10:34:32 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:06.473 10:34:32 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2851932' 00:07:06.473 killing process with pid 2851932 00:07:06.473 10:34:32 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 2851932 00:07:06.473 10:34:32 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 2851932 00:07:06.732 00:07:06.732 real 0m1.274s 00:07:06.732 user 0m2.141s 00:07:06.732 sys 0m0.561s 00:07:06.732 10:34:32 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:06.732 10:34:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:06.732 ************************************ 00:07:06.732 END TEST spdkcli_tcp 00:07:06.732 ************************************ 00:07:06.732 10:34:32 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:06.732 10:34:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:06.732 10:34:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:06.732 10:34:32 -- common/autotest_common.sh@10 -- # set +x 00:07:06.992 ************************************ 00:07:06.992 START TEST dpdk_mem_utility 00:07:06.992 ************************************ 00:07:06.992 10:34:32 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:06.992 * Looking for test storage... 00:07:06.992 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:07:06.992 10:34:32 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:06.992 10:34:32 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:07:06.992 10:34:32 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:06.992 10:34:32 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:06.992 10:34:32 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.992 10:34:32 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.992 10:34:32 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.992 10:34:32 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.992 10:34:32 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.992 10:34:32 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.992 10:34:32 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.992 10:34:32 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.992 10:34:32 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.992 10:34:32 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.992 10:34:32 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.992 10:34:32 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:06.992 10:34:32 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:06.992 10:34:32 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.992 10:34:32 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.992 10:34:32 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:06.992 10:34:32 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:06.992 10:34:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.992 10:34:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:06.992 10:34:33 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.992 10:34:33 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:06.992 10:34:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:06.992 10:34:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.992 10:34:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:06.992 10:34:33 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.992 10:34:33 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.992 10:34:33 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.992 10:34:33 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:06.992 10:34:33 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.992 10:34:33 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:06.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.992 --rc genhtml_branch_coverage=1 00:07:06.992 --rc genhtml_function_coverage=1 00:07:06.992 --rc genhtml_legend=1 00:07:06.992 --rc geninfo_all_blocks=1 00:07:06.992 --rc geninfo_unexecuted_blocks=1 00:07:06.992 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:06.992 ' 00:07:06.992 10:34:33 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:06.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.992 --rc genhtml_branch_coverage=1 00:07:06.992 --rc genhtml_function_coverage=1 00:07:06.992 --rc genhtml_legend=1 00:07:06.992 --rc geninfo_all_blocks=1 00:07:06.992 --rc geninfo_unexecuted_blocks=1 00:07:06.992 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:06.992 ' 00:07:06.992 10:34:33 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:06.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.992 --rc genhtml_branch_coverage=1 00:07:06.992 --rc genhtml_function_coverage=1 00:07:06.992 --rc genhtml_legend=1 00:07:06.992 --rc geninfo_all_blocks=1 00:07:06.992 --rc geninfo_unexecuted_blocks=1 00:07:06.992 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:06.992 ' 00:07:06.992 10:34:33 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:06.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.992 --rc genhtml_branch_coverage=1 00:07:06.992 --rc genhtml_function_coverage=1 00:07:06.992 --rc genhtml_legend=1 00:07:06.992 --rc geninfo_all_blocks=1 00:07:06.992 --rc geninfo_unexecuted_blocks=1 00:07:06.992 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:06.992 ' 00:07:06.992 10:34:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:06.992 10:34:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2852173 00:07:06.992 10:34:33 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2852173 00:07:06.992 10:34:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:06.992 10:34:33 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 2852173 ']' 00:07:06.992 10:34:33 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.992 10:34:33 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:06.992 10:34:33 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.992 10:34:33 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:06.992 10:34:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:06.993 [2024-11-05 10:34:33.039749] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:07:06.993 [2024-11-05 10:34:33.039825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852173 ] 00:07:07.252 [2024-11-05 10:34:33.162856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.252 [2024-11-05 10:34:33.222479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.191 10:34:33 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:08.191 10:34:33 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:07:08.191 10:34:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:08.191 10:34:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:08.191 10:34:33 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.191 10:34:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:08.191 { 00:07:08.191 "filename": "/tmp/spdk_mem_dump.txt" 00:07:08.191 } 00:07:08.191 10:34:33 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.191 10:34:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:08.191 DPDK memory size 810.000000 MiB in 1 heap(s) 00:07:08.191 1 heaps totaling size 810.000000 MiB 00:07:08.191 size: 810.000000 MiB heap id: 0 00:07:08.191 end heaps---------- 00:07:08.191 9 mempools totaling size 595.772034 MiB 00:07:08.191 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:08.191 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:08.191 size: 92.545471 MiB name: bdev_io_2852173 00:07:08.191 size: 50.003479 MiB name: msgpool_2852173 00:07:08.191 size: 36.509338 MiB name: fsdev_io_2852173 00:07:08.191 size: 21.763794 MiB name: PDU_Pool 00:07:08.191 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:08.191 size: 4.133484 MiB name: evtpool_2852173 00:07:08.191 size: 0.026123 MiB name: Session_Pool 00:07:08.191 end mempools------- 00:07:08.191 6 memzones totaling size 4.142822 MiB 00:07:08.191 size: 1.000366 MiB name: RG_ring_0_2852173 00:07:08.191 size: 1.000366 MiB name: RG_ring_1_2852173 00:07:08.191 size: 1.000366 MiB name: RG_ring_4_2852173 
00:07:08.191 size: 1.000366 MiB name: RG_ring_5_2852173 00:07:08.191 size: 0.125366 MiB name: RG_ring_2_2852173 00:07:08.191 size: 0.015991 MiB name: RG_ring_3_2852173 00:07:08.191 end memzones------- 00:07:08.191 10:34:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:08.191 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:07:08.191 list of free elements. size: 10.862488 MiB 00:07:08.191 element at address: 0x200018a00000 with size: 0.999878 MiB 00:07:08.191 element at address: 0x200018c00000 with size: 0.999878 MiB 00:07:08.191 element at address: 0x200000400000 with size: 0.998535 MiB 00:07:08.191 element at address: 0x200031800000 with size: 0.994446 MiB 00:07:08.191 element at address: 0x200008000000 with size: 0.959839 MiB 00:07:08.191 element at address: 0x200012c00000 with size: 0.954285 MiB 00:07:08.191 element at address: 0x200018e00000 with size: 0.936584 MiB 00:07:08.191 element at address: 0x200000200000 with size: 0.717346 MiB 00:07:08.191 element at address: 0x20001a600000 with size: 0.582886 MiB 00:07:08.191 element at address: 0x200000c00000 with size: 0.495422 MiB 00:07:08.191 element at address: 0x200003e00000 with size: 0.490723 MiB 00:07:08.191 element at address: 0x200019000000 with size: 0.485657 MiB 00:07:08.191 element at address: 0x200010600000 with size: 0.481934 MiB 00:07:08.191 element at address: 0x200027a00000 with size: 0.410034 MiB 00:07:08.191 element at address: 0x200000800000 with size: 0.355042 MiB 00:07:08.191 list of standard malloc elements. size: 199.218628 MiB 00:07:08.191 element at address: 0x2000081fff80 with size: 132.000122 MiB 00:07:08.191 element at address: 0x200003ffff80 with size: 64.000122 MiB 00:07:08.191 element at address: 0x200018afff80 with size: 1.000122 MiB 00:07:08.191 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:07:08.191 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:08.191 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:08.191 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:07:08.191 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:08.191 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:07:08.191 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:08.191 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:08.191 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:07:08.191 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:07:08.191 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:07:08.191 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:07:08.191 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:07:08.191 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:07:08.191 element at address: 0x20000085b040 with size: 0.000183 MiB 00:07:08.191 element at address: 0x20000085b100 with size: 0.000183 MiB 00:07:08.191 element at address: 0x2000008db3c0 with size: 0.000183 MiB 00:07:08.191 element at address: 0x2000008db5c0 with size: 0.000183 MiB 00:07:08.191 element at address: 0x2000008df880 with size: 0.000183 MiB 00:07:08.191 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:07:08.191 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:07:08.191 element at address: 0x200000cff000 with size: 0.000183 MiB 00:07:08.191 element at address: 0x200000cff0c0 with size: 
0.000183 MiB 00:07:08.191 element at address: 0x200003e7da00 with size: 0.000183 MiB 00:07:08.191 element at address: 0x200003e7dac0 with size: 0.000183 MiB 00:07:08.191 element at address: 0x200003efdd80 with size: 0.000183 MiB 00:07:08.191 element at address: 0x2000080fdd80 with size: 0.000183 MiB 00:07:08.191 element at address: 0x20001067b600 with size: 0.000183 MiB 00:07:08.191 element at address: 0x20001067b6c0 with size: 0.000183 MiB 00:07:08.191 element at address: 0x2000106fb980 with size: 0.000183 MiB 00:07:08.191 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:07:08.191 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:07:08.191 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:07:08.191 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:07:08.191 element at address: 0x20001a695380 with size: 0.000183 MiB 00:07:08.191 element at address: 0x20001a695440 with size: 0.000183 MiB 00:07:08.191 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:07:08.191 element at address: 0x200027a69040 with size: 0.000183 MiB 00:07:08.191 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:07:08.191 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:07:08.191 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:07:08.191 list of memzone associated elements. size: 599.918884 MiB 00:07:08.191 element at address: 0x20001a695500 with size: 211.416748 MiB 00:07:08.191 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:08.191 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:07:08.191 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:08.191 element at address: 0x200012df4780 with size: 92.045044 MiB 00:07:08.191 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2852173_0 00:07:08.191 element at address: 0x200000dff380 with size: 48.003052 MiB 00:07:08.191 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2852173_0 00:07:08.191 element at address: 0x2000107fdb80 with size: 36.008911 MiB 00:07:08.191 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2852173_0 00:07:08.191 element at address: 0x2000191be940 with size: 20.255554 MiB 00:07:08.191 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:08.191 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:07:08.191 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:08.191 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:07:08.191 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2852173_0 00:07:08.191 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:07:08.191 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2852173 00:07:08.191 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:08.191 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2852173 00:07:08.191 element at address: 0x2000106fba40 with size: 1.008118 MiB 00:07:08.192 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:08.192 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:07:08.192 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:08.192 element at address: 0x2000080fde40 with size: 1.008118 MiB 00:07:08.192 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:08.192 element at address: 0x200003efde40 with size: 1.008118 MiB 00:07:08.192 associated memzone info: size: 1.007996 MiB 
name: MP_SCSI_TASK_Pool 00:07:08.192 element at address: 0x200000cff180 with size: 1.000488 MiB 00:07:08.192 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2852173 00:07:08.192 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:07:08.192 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2852173 00:07:08.192 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:07:08.192 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2852173 00:07:08.192 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:07:08.192 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2852173 00:07:08.192 element at address: 0x20000085b1c0 with size: 0.500488 MiB 00:07:08.192 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2852173 00:07:08.192 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:07:08.192 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2852173 00:07:08.192 element at address: 0x20001067b780 with size: 0.500488 MiB 00:07:08.192 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:08.192 element at address: 0x200003e7db80 with size: 0.500488 MiB 00:07:08.192 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:08.192 element at address: 0x20001907c540 with size: 0.250488 MiB 00:07:08.192 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:08.192 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:07:08.192 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2852173 00:07:08.192 element at address: 0x2000008df940 with size: 0.125488 MiB 00:07:08.192 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2852173 00:07:08.192 element at address: 0x2000080f5b80 with size: 0.031738 MiB 00:07:08.192 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:08.192 element at address: 0x200027a69100 with size: 0.023743 MiB 00:07:08.192 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:08.192 element at address: 0x2000008db680 with size: 0.016113 MiB 00:07:08.192 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2852173 00:07:08.192 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:07:08.192 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:08.192 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:07:08.192 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2852173 00:07:08.192 element at address: 0x2000008db480 with size: 0.000305 MiB 00:07:08.192 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2852173 00:07:08.192 element at address: 0x20000085af00 with size: 0.000305 MiB 00:07:08.192 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2852173 00:07:08.192 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:07:08.192 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:08.192 10:34:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:08.192 10:34:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2852173 00:07:08.192 10:34:34 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 2852173 ']' 00:07:08.192 10:34:34 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 2852173 00:07:08.192 10:34:34 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:07:08.192 10:34:34 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux 
= Linux ']' 00:07:08.192 10:34:34 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2852173 00:07:08.192 10:34:34 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:08.192 10:34:34 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:08.192 10:34:34 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2852173' 00:07:08.192 killing process with pid 2852173 00:07:08.192 10:34:34 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 2852173 00:07:08.192 10:34:34 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 2852173 00:07:08.761 00:07:08.761 real 0m1.722s 00:07:08.761 user 0m1.807s 00:07:08.761 sys 0m0.554s 00:07:08.761 10:34:34 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:08.761 10:34:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:08.761 ************************************ 00:07:08.761 END TEST dpdk_mem_utility 00:07:08.761 ************************************ 00:07:08.761 10:34:34 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:07:08.761 10:34:34 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:08.761 10:34:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:08.761 10:34:34 -- common/autotest_common.sh@10 -- # set +x 00:07:08.761 ************************************ 00:07:08.761 START TEST event 00:07:08.761 ************************************ 00:07:08.761 10:34:34 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:07:08.761 * Looking for test storage... 00:07:08.761 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:07:08.761 10:34:34 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:08.761 10:34:34 event -- common/autotest_common.sh@1691 -- # lcov --version 00:07:08.761 10:34:34 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:08.761 10:34:34 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:08.761 10:34:34 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.761 10:34:34 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.761 10:34:34 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.761 10:34:34 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.761 10:34:34 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.761 10:34:34 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.761 10:34:34 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.761 10:34:34 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.761 10:34:34 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.761 10:34:34 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.761 10:34:34 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.761 10:34:34 event -- scripts/common.sh@344 -- # case "$op" in 00:07:08.761 10:34:34 event -- scripts/common.sh@345 -- # : 1 00:07:08.761 10:34:34 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.761 10:34:34 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:08.761 10:34:34 event -- scripts/common.sh@365 -- # decimal 1 00:07:08.761 10:34:34 event -- scripts/common.sh@353 -- # local d=1 00:07:08.761 10:34:34 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.761 10:34:34 event -- scripts/common.sh@355 -- # echo 1 00:07:08.761 10:34:34 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.761 10:34:34 event -- scripts/common.sh@366 -- # decimal 2 00:07:08.761 10:34:34 event -- scripts/common.sh@353 -- # local d=2 00:07:08.761 10:34:34 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.761 10:34:34 event -- scripts/common.sh@355 -- # echo 2 00:07:08.761 10:34:34 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.761 10:34:34 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.761 10:34:34 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.761 10:34:34 event -- scripts/common.sh@368 -- # return 0 00:07:08.761 10:34:34 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.761 10:34:34 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:08.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.761 --rc genhtml_branch_coverage=1 00:07:08.761 --rc genhtml_function_coverage=1 00:07:08.762 --rc genhtml_legend=1 00:07:08.762 --rc geninfo_all_blocks=1 00:07:08.762 --rc geninfo_unexecuted_blocks=1 00:07:08.762 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:08.762 ' 00:07:08.762 10:34:34 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:08.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.762 --rc genhtml_branch_coverage=1 00:07:08.762 --rc genhtml_function_coverage=1 00:07:08.762 --rc genhtml_legend=1 00:07:08.762 --rc geninfo_all_blocks=1 00:07:08.762 --rc geninfo_unexecuted_blocks=1 00:07:08.762 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:08.762 ' 00:07:08.762 10:34:34 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:08.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.762 --rc genhtml_branch_coverage=1 00:07:08.762 --rc genhtml_function_coverage=1 00:07:08.762 --rc genhtml_legend=1 00:07:08.762 --rc geninfo_all_blocks=1 00:07:08.762 --rc geninfo_unexecuted_blocks=1 00:07:08.762 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:08.762 ' 00:07:08.762 10:34:34 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:08.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.762 --rc genhtml_branch_coverage=1 00:07:08.762 --rc genhtml_function_coverage=1 00:07:08.762 --rc genhtml_legend=1 00:07:08.762 --rc geninfo_all_blocks=1 00:07:08.762 --rc geninfo_unexecuted_blocks=1 00:07:08.762 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:08.762 ' 00:07:08.762 10:34:34 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:08.762 10:34:34 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:08.762 10:34:34 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:08.762 10:34:34 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:07:08.762 10:34:34 event -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:07:08.762 10:34:34 event -- common/autotest_common.sh@10 -- # set +x 00:07:09.021 ************************************ 00:07:09.021 START TEST event_perf 00:07:09.021 ************************************ 00:07:09.021 10:34:34 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:09.021 Running I/O for 1 seconds...[2024-11-05 10:34:34.864067] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:07:09.021 [2024-11-05 10:34:34.864110] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852429 ] 00:07:09.021 [2024-11-05 10:34:34.968564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:09.021 [2024-11-05 10:34:35.027133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.021 [2024-11-05 10:34:35.027222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.021 [2024-11-05 10:34:35.027313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.021 [2024-11-05 10:34:35.027318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.397 Running I/O for 1 seconds... 00:07:10.397 lcore 0: 175641 00:07:10.397 lcore 1: 175638 00:07:10.397 lcore 2: 175639 00:07:10.397 lcore 3: 175640 00:07:10.397 done. 00:07:10.397 00:07:10.397 real 0m1.221s 00:07:10.397 user 0m4.114s 00:07:10.397 sys 0m0.102s 00:07:10.397 10:34:36 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:10.397 10:34:36 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:10.397 ************************************ 00:07:10.397 END TEST event_perf 00:07:10.397 ************************************ 00:07:10.397 10:34:36 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:10.397 10:34:36 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:10.397 10:34:36 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:10.397 10:34:36 event -- common/autotest_common.sh@10 -- # set +x 00:07:10.397 ************************************ 00:07:10.397 START TEST event_reactor 00:07:10.397 ************************************ 00:07:10.397 10:34:36 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:10.397 [2024-11-05 10:34:36.169686] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
00:07:10.397 [2024-11-05 10:34:36.169774] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852626 ] 00:07:10.397 [2024-11-05 10:34:36.295148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.397 [2024-11-05 10:34:36.349934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.334 test_start 00:07:11.334 oneshot 00:07:11.334 tick 100 00:07:11.334 tick 100 00:07:11.334 tick 250 00:07:11.334 tick 100 00:07:11.334 tick 100 00:07:11.334 tick 100 00:07:11.334 tick 250 00:07:11.334 tick 500 00:07:11.334 tick 100 00:07:11.334 tick 100 00:07:11.334 tick 250 00:07:11.334 tick 100 00:07:11.334 tick 100 00:07:11.334 test_end 00:07:11.334 00:07:11.334 real 0m1.247s 00:07:11.334 user 0m1.119s 00:07:11.334 sys 0m0.122s 00:07:11.334 10:34:37 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:11.334 10:34:37 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:11.334 ************************************ 00:07:11.334 END TEST event_reactor 00:07:11.334 ************************************ 00:07:11.594 10:34:37 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:11.594 10:34:37 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:11.594 10:34:37 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:11.594 10:34:37 event -- common/autotest_common.sh@10 -- # set +x 00:07:11.594 ************************************ 00:07:11.594 START TEST event_reactor_perf 00:07:11.594 ************************************ 00:07:11.594 10:34:37 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:11.594 [2024-11-05 10:34:37.499101] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
00:07:11.594 [2024-11-05 10:34:37.499189] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852824 ] 00:07:11.594 [2024-11-05 10:34:37.628541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.853 [2024-11-05 10:34:37.686893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.790 test_start 00:07:12.790 test_end 00:07:12.790 Performance: 609020 events per second 00:07:12.790 00:07:12.790 real 0m1.257s 00:07:12.790 user 0m1.120s 00:07:12.790 sys 0m0.131s 00:07:12.790 10:34:38 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:12.790 10:34:38 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:12.790 ************************************ 00:07:12.790 END TEST event_reactor_perf 00:07:12.790 ************************************ 00:07:12.790 10:34:38 event -- event/event.sh@49 -- # uname -s 00:07:12.790 10:34:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:12.790 10:34:38 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:12.790 10:34:38 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:12.790 10:34:38 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:12.790 10:34:38 event -- common/autotest_common.sh@10 -- # set +x 00:07:12.790 ************************************ 00:07:12.790 START TEST event_scheduler 00:07:12.790 ************************************ 00:07:12.790 10:34:38 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:13.049 * Looking for test storage... 
00:07:13.049 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:07:13.049 10:34:38 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:13.049 10:34:38 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:07:13.049 10:34:38 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:13.049 10:34:38 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:13.049 10:34:38 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.049 10:34:38 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.049 10:34:38 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.049 10:34:38 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.049 10:34:38 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.049 10:34:38 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.049 10:34:38 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.049 10:34:38 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.049 10:34:38 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.049 10:34:38 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.049 10:34:38 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.049 10:34:38 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:13.049 10:34:38 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:13.049 10:34:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.049 10:34:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.049 10:34:39 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:13.049 10:34:39 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:13.049 10:34:39 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.050 10:34:39 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:13.050 10:34:39 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.050 10:34:39 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:13.050 10:34:39 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:13.050 10:34:39 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.050 10:34:39 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:13.050 10:34:39 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.050 10:34:39 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.050 10:34:39 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.050 10:34:39 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:13.050 10:34:39 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.050 10:34:39 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:13.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.050 --rc genhtml_branch_coverage=1 00:07:13.050 --rc genhtml_function_coverage=1 00:07:13.050 --rc genhtml_legend=1 00:07:13.050 --rc geninfo_all_blocks=1 00:07:13.050 --rc geninfo_unexecuted_blocks=1 00:07:13.050 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:13.050 ' 00:07:13.050 10:34:39 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:13.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.050 --rc genhtml_branch_coverage=1 00:07:13.050 --rc genhtml_function_coverage=1 00:07:13.050 --rc genhtml_legend=1 00:07:13.050 --rc geninfo_all_blocks=1 00:07:13.050 --rc geninfo_unexecuted_blocks=1 00:07:13.050 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:13.050 ' 00:07:13.050 10:34:39 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:13.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.050 --rc genhtml_branch_coverage=1 00:07:13.050 --rc genhtml_function_coverage=1 00:07:13.050 --rc genhtml_legend=1 00:07:13.050 --rc geninfo_all_blocks=1 00:07:13.050 --rc geninfo_unexecuted_blocks=1 00:07:13.050 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:13.050 ' 00:07:13.050 10:34:39 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:13.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.050 --rc genhtml_branch_coverage=1 00:07:13.050 --rc genhtml_function_coverage=1 00:07:13.050 --rc genhtml_legend=1 00:07:13.050 --rc geninfo_all_blocks=1 00:07:13.050 --rc geninfo_unexecuted_blocks=1 00:07:13.050 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:13.050 ' 00:07:13.050 10:34:39 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:13.050 10:34:39 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler 
-m 0xF -p 0x2 --wait-for-rpc -f 00:07:13.050 10:34:39 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2853055 00:07:13.050 10:34:39 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:13.050 10:34:39 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2853055 00:07:13.050 10:34:39 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 2853055 ']' 00:07:13.050 10:34:39 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.050 10:34:39 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:13.050 10:34:39 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.050 10:34:39 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:13.050 10:34:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:13.050 [2024-11-05 10:34:39.026679] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:07:13.050 [2024-11-05 10:34:39.026755] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2853055 ] 00:07:13.050 [2024-11-05 10:34:39.105117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.309 [2024-11-05 10:34:39.154677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.309 [2024-11-05 10:34:39.154774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.309 [2024-11-05 10:34:39.154817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.309 [2024-11-05 10:34:39.154819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.309 10:34:39 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:13.309 10:34:39 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:07:13.309 10:34:39 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:13.309 10:34:39 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.309 10:34:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:13.309 [2024-11-05 10:34:39.231600] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:13.309 [2024-11-05 10:34:39.231620] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:13.309 [2024-11-05 10:34:39.231631] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:13.309 [2024-11-05 10:34:39.231639] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:13.309 [2024-11-05 10:34:39.231646] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:13.309 10:34:39 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.309 10:34:39 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:13.309 10:34:39 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.309 10:34:39 
event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:13.309 [2024-11-05 10:34:39.308408] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:13.309 10:34:39 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.309 10:34:39 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:13.309 10:34:39 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:13.309 10:34:39 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:13.309 10:34:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:13.309 ************************************ 00:07:13.309 START TEST scheduler_create_thread 00:07:13.309 ************************************ 00:07:13.309 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:07:13.309 10:34:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:13.309 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.309 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.309 2 00:07:13.309 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.309 10:34:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:13.309 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.309 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.309 3 00:07:13.309 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.309 10:34:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:13.309 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.309 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.569 4 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.569 5 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.569 10:34:39 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.569 6 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.569 7 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.569 8 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.569 9 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.569 10 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.569 10:34:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.505 10:34:40 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.505 10:34:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:14.505 10:34:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.505 10:34:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:15.882 10:34:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.882 10:34:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:15.882 10:34:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:15.882 10:34:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.882 10:34:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.819 10:34:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.819 00:07:16.819 real 0m3.384s 00:07:16.819 user 0m0.022s 00:07:16.819 sys 0m0.010s 00:07:16.819 10:34:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:16.819 10:34:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.819 ************************************ 00:07:16.819 END TEST scheduler_create_thread 00:07:16.819 ************************************ 00:07:16.819 10:34:42 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:16.819 10:34:42 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2853055 00:07:16.819 10:34:42 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 2853055 ']' 00:07:16.819 10:34:42 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 2853055 00:07:16.819 10:34:42 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:07:16.819 10:34:42 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:16.819 10:34:42 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2853055 00:07:16.819 10:34:42 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:07:16.819 10:34:42 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:07:16.819 10:34:42 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2853055' 00:07:16.819 killing process with pid 2853055 00:07:16.819 10:34:42 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 2853055 00:07:16.819 10:34:42 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 2853055 00:07:17.078 [2024-11-05 10:34:43.116499] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
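The scheduler_create_thread run traced above reduces to a short RPC sequence against the scheduler test app. A minimal sketch of that sequence, assuming the app is already listening on /var/tmp/spdk.sock and that scheduler_plugin (from test/event/scheduler) is importable by rpc.py; the rpc.py path is abbreviated and this is not the test script itself:

#!/usr/bin/env bash
# Sketch of the thread-creation sequence seen in the trace above.
set -euo pipefail

rpc="scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin"

# One busy and one idle thread pinned to each of the four cores in the 0xF mask.
for mask in 0x1 0x2 0x4 0x8; do
    $rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100
    $rpc scheduler_thread_create -n idle_pinned   -m "$mask" -a 0
done

# Unpinned threads; scheduler_thread_create prints the new thread id, which the
# trace above captures as thread_id=11 and thread_id=12.
$rpc scheduler_thread_create -n one_third_active -a 30
tid=$($rpc scheduler_thread_create -n half_active -a 0)
$rpc scheduler_thread_set_active "$tid" 50

tid=$($rpc scheduler_thread_create -n deleted -a 100)
$rpc scheduler_thread_delete "$tid"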
00:07:17.337 00:07:17.337 real 0m4.519s 00:07:17.337 user 0m7.983s 00:07:17.337 sys 0m0.429s 00:07:17.337 10:34:43 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:17.337 10:34:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:17.337 ************************************ 00:07:17.337 END TEST event_scheduler 00:07:17.337 ************************************ 00:07:17.337 10:34:43 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:17.337 10:34:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:17.337 10:34:43 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:17.337 10:34:43 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:17.337 10:34:43 event -- common/autotest_common.sh@10 -- # set +x 00:07:17.337 ************************************ 00:07:17.337 START TEST app_repeat 00:07:17.337 ************************************ 00:07:17.337 10:34:43 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:07:17.337 10:34:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.337 10:34:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.337 10:34:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:17.337 10:34:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.337 10:34:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:17.337 10:34:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:17.337 10:34:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:17.595 10:34:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2853642 00:07:17.595 10:34:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:17.595 10:34:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2853642' 00:07:17.595 Process app_repeat pid: 2853642 00:07:17.595 10:34:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:17.595 10:34:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:17.595 spdk_app_start Round 0 00:07:17.595 10:34:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2853642 /var/tmp/spdk-nbd.sock 00:07:17.595 10:34:43 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2853642 ']' 00:07:17.595 10:34:43 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:17.595 10:34:43 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:17.595 10:34:43 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:17.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:17.595 10:34:43 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:17.595 10:34:43 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:17.595 10:34:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:17.595 [2024-11-05 10:34:43.436831] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
00:07:17.595 [2024-11-05 10:34:43.436929] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2853642 ] 00:07:17.595 [2024-11-05 10:34:43.563079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:17.595 [2024-11-05 10:34:43.624357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.595 [2024-11-05 10:34:43.624362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.527 10:34:44 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:18.527 10:34:44 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:18.527 10:34:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:18.785 Malloc0 00:07:18.785 10:34:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:19.042 Malloc1 00:07:19.042 10:34:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:19.042 10:34:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.042 10:34:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:19.042 10:34:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:19.042 10:34:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:19.042 10:34:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:19.042 10:34:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:19.042 10:34:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.042 10:34:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:19.042 10:34:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:19.042 10:34:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:19.042 10:34:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:19.042 10:34:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:19.042 10:34:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:19.042 10:34:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:19.042 10:34:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:19.300 /dev/nbd0 00:07:19.300 10:34:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:19.300 10:34:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:19.300 10:34:45 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:19.300 10:34:45 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:19.300 10:34:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:19.300 10:34:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:19.300 10:34:45 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:07:19.300 10:34:45 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:19.300 10:34:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:19.300 10:34:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:19.300 10:34:45 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:19.300 1+0 records in 00:07:19.300 1+0 records out 00:07:19.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000145696 s, 28.1 MB/s 00:07:19.300 10:34:45 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:19.300 10:34:45 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:19.300 10:34:45 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:19.300 10:34:45 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:19.300 10:34:45 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:19.300 10:34:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.300 10:34:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:19.300 10:34:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:19.558 /dev/nbd1 00:07:19.558 10:34:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:19.558 10:34:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:19.558 10:34:45 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:19.558 10:34:45 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:19.558 10:34:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:19.558 10:34:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:19.558 10:34:45 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:19.558 10:34:45 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:19.558 10:34:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:19.558 10:34:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:19.558 10:34:45 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:19.558 1+0 records in 00:07:19.558 1+0 records out 00:07:19.558 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298996 s, 13.7 MB/s 00:07:19.558 10:34:45 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:19.558 10:34:45 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:19.558 10:34:45 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:19.558 10:34:45 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:19.558 10:34:45 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:19.558 10:34:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.558 10:34:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
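The waitfornbd helper traced for nbd0 and nbd1 above follows a simple pattern: poll /proc/partitions until the device shows up, then read one 4096-byte block with O_DIRECT and check that something actually came back. A rough sketch, with the temp-file path and retry delay chosen here only for illustration:

waitfornbd_sketch() {
    local nbd_name=$1 tmp=/tmp/nbdtest i size

    # Wait for the kernel to publish the device (up to 20 attempts).
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done

    # Read a single block with O_DIRECT; an empty result means the device
    # exists but is not yet usable.
    dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    [ "$size" != 0 ]    # non-zero exit if the read produced nothing
}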
00:07:19.558 10:34:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:19.558 10:34:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.558 10:34:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:19.815 10:34:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:19.815 { 00:07:19.815 "nbd_device": "/dev/nbd0", 00:07:19.815 "bdev_name": "Malloc0" 00:07:19.815 }, 00:07:19.815 { 00:07:19.815 "nbd_device": "/dev/nbd1", 00:07:19.815 "bdev_name": "Malloc1" 00:07:19.815 } 00:07:19.815 ]' 00:07:19.815 10:34:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:19.815 { 00:07:19.815 "nbd_device": "/dev/nbd0", 00:07:19.815 "bdev_name": "Malloc0" 00:07:19.815 }, 00:07:19.815 { 00:07:19.815 "nbd_device": "/dev/nbd1", 00:07:19.815 "bdev_name": "Malloc1" 00:07:19.815 } 00:07:19.815 ]' 00:07:19.815 10:34:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:20.074 /dev/nbd1' 00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:20.074 /dev/nbd1' 00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:20.074 256+0 records in 00:07:20.074 256+0 records out 00:07:20.074 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111717 s, 93.9 MB/s 00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:20.074 256+0 records in 00:07:20.074 256+0 records out 00:07:20.074 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028845 s, 36.4 MB/s 00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:20.074 256+0 records in 00:07:20.074 256+0 records out 00:07:20.074 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0309868 s, 33.8 MB/s 
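The data-verification pass above is plain dd and cmp: seed a 1 MiB file from /dev/urandom, push it through each nbd device with O_DIRECT, then compare it back byte-for-byte. A condensed sketch (write and verify are separate calls to the traced helper; they are combined here, and the temp path is illustrative):

nbd_verify_sketch() {
    local tmp=/tmp/nbdrandtest nbd

    dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data

    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct  # write through the nbd device
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                             # any mismatch fails the test
    done

    rm "$tmp"
}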
00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.074 10:34:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:20.074 10:34:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.074 10:34:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:20.074 10:34:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:20.074 10:34:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:20.074 10:34:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.074 10:34:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.074 10:34:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:20.074 10:34:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:20.074 10:34:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.074 10:34:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:20.332 10:34:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:20.332 10:34:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:20.332 10:34:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:20.332 10:34:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.332 10:34:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.332 10:34:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:20.332 10:34:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:20.332 10:34:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.332 10:34:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.332 10:34:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:20.590 10:34:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:20.590 10:34:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:20.590 10:34:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:20.590 10:34:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.590 10:34:46 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.590 10:34:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:20.590 10:34:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:20.590 10:34:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.590 10:34:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:20.590 10:34:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.590 10:34:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:20.849 10:34:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:20.849 10:34:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:20.849 10:34:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.849 10:34:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:21.108 10:34:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:21.108 10:34:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:21.108 10:34:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:21.108 10:34:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:21.108 10:34:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:21.108 10:34:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:21.108 10:34:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:21.108 10:34:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:21.108 10:34:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:21.367 10:34:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:21.367 [2024-11-05 10:34:47.430054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:21.625 [2024-11-05 10:34:47.485695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.625 [2024-11-05 10:34:47.485700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.625 [2024-11-05 10:34:47.536927] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:21.625 [2024-11-05 10:34:47.536979] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:24.157 10:34:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:24.157 10:34:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:24.157 spdk_app_start Round 1 00:07:24.157 10:34:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2853642 /var/tmp/spdk-nbd.sock 00:07:24.157 10:34:50 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2853642 ']' 00:07:24.157 10:34:50 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:24.157 10:34:50 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:24.157 10:34:50 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:24.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
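Each app_repeat round above follows the same shape: create two 64 MB malloc bdevs (4096-byte blocks) over the app's RPC socket, export them as /dev/nbd0 and /dev/nbd1, run the dd/cmp verification, tear the disks down, and kill the SPDK instance so the same app_repeat process (pid 2853642 throughout) brings it back up for the next round. A condensed sketch of one round, with the rpc.py path shortened:

rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$rpc bdev_malloc_create 64 4096            # Malloc0
$rpc bdev_malloc_create 64 4096            # Malloc1
$rpc nbd_start_disk Malloc0 /dev/nbd0
$rpc nbd_start_disk Malloc1 /dev/nbd1

nbd_verify_sketch                          # write/verify pass, as sketched above

$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1
$rpc spdk_kill_instance SIGTERM            # app_repeat restarts its SPDK app for the next round
sleep 3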
00:07:24.157 10:34:50 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:24.157 10:34:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:24.415 10:34:50 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:24.415 10:34:50 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:24.415 10:34:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:24.674 Malloc0 00:07:24.674 10:34:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:24.932 Malloc1 00:07:24.932 10:34:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:24.932 10:34:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.932 10:34:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:24.932 10:34:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:24.933 10:34:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.933 10:34:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:24.933 10:34:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:24.933 10:34:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.933 10:34:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:24.933 10:34:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:24.933 10:34:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.933 10:34:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:24.933 10:34:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:24.933 10:34:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:24.933 10:34:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:24.933 10:34:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:25.191 /dev/nbd0 00:07:25.191 10:34:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:25.191 10:34:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:25.191 10:34:51 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:25.191 10:34:51 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:25.191 10:34:51 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:25.191 10:34:51 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:25.191 10:34:51 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:25.191 10:34:51 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:25.191 10:34:51 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:25.191 10:34:51 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:25.191 10:34:51 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:25.191 1+0 records in 00:07:25.191 1+0 records out 00:07:25.191 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024573 s, 16.7 MB/s 00:07:25.191 10:34:51 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:25.191 10:34:51 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:25.191 10:34:51 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:25.191 10:34:51 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:25.191 10:34:51 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:25.191 10:34:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.191 10:34:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:25.191 10:34:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:25.759 /dev/nbd1 00:07:25.759 10:34:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:25.759 10:34:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:25.759 10:34:51 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:25.759 10:34:51 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:25.759 10:34:51 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:25.759 10:34:51 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:25.759 10:34:51 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:25.759 10:34:51 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:25.759 10:34:51 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:25.759 10:34:51 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:25.759 10:34:51 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:25.759 1+0 records in 00:07:25.759 1+0 records out 00:07:25.759 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264153 s, 15.5 MB/s 00:07:25.759 10:34:51 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:25.759 10:34:51 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:25.759 10:34:51 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:25.759 10:34:51 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:25.759 10:34:51 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:25.759 10:34:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.759 10:34:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:25.759 10:34:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:25.759 10:34:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.759 10:34:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:26.018 { 00:07:26.018 "nbd_device": "/dev/nbd0", 00:07:26.018 "bdev_name": "Malloc0" 00:07:26.018 }, 00:07:26.018 { 00:07:26.018 "nbd_device": "/dev/nbd1", 00:07:26.018 "bdev_name": "Malloc1" 00:07:26.018 } 00:07:26.018 ]' 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:26.018 { 00:07:26.018 "nbd_device": "/dev/nbd0", 00:07:26.018 "bdev_name": "Malloc0" 00:07:26.018 }, 00:07:26.018 { 00:07:26.018 "nbd_device": "/dev/nbd1", 00:07:26.018 "bdev_name": "Malloc1" 00:07:26.018 } 00:07:26.018 ]' 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:26.018 /dev/nbd1' 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:26.018 /dev/nbd1' 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:26.018 256+0 records in 00:07:26.018 256+0 records out 00:07:26.018 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00684005 s, 153 MB/s 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:26.018 256+0 records in 00:07:26.018 256+0 records out 00:07:26.018 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025163 s, 41.7 MB/s 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:26.018 256+0 records in 00:07:26.018 256+0 records out 00:07:26.018 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291826 s, 35.9 MB/s 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.018 10:34:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:26.018 10:34:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.018 10:34:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:26.018 10:34:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:26.018 10:34:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:26.018 10:34:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.018 10:34:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.018 10:34:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:26.018 10:34:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:26.018 10:34:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.018 10:34:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:26.277 10:34:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:26.277 10:34:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:26.277 10:34:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:26.277 10:34:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.277 10:34:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.277 10:34:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:26.277 10:34:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:26.277 10:34:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.277 10:34:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.277 10:34:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:26.536 10:34:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:26.795 10:34:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:26.795 10:34:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:26.795 10:34:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.795 10:34:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.795 10:34:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:26.795 10:34:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:26.795 10:34:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.795 10:34:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:07:26.795 10:34:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.795 10:34:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:27.053 10:34:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:27.053 10:34:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:27.053 10:34:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:27.053 10:34:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:27.053 10:34:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:27.053 10:34:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:27.053 10:34:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:27.053 10:34:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:27.053 10:34:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:27.053 10:34:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:27.053 10:34:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:27.053 10:34:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:27.053 10:34:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:27.312 10:34:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:27.571 [2024-11-05 10:34:53.442388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:27.571 [2024-11-05 10:34:53.497212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.571 [2024-11-05 10:34:53.497217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.571 [2024-11-05 10:34:53.549372] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:27.571 [2024-11-05 10:34:53.549428] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:30.858 10:34:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:30.858 10:34:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:30.858 spdk_app_start Round 2 00:07:30.858 10:34:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2853642 /var/tmp/spdk-nbd.sock 00:07:30.858 10:34:56 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2853642 ']' 00:07:30.858 10:34:56 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:30.858 10:34:56 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:30.858 10:34:56 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:30.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
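The app itself is launched once (event.sh@18 above), and every round then blocks on the same handshake: poll until the app's private socket answers RPCs before touching bdevs. A simplified stand-in for that launch-and-wait pattern; the real waitforlisten helper lives in autotest_common.sh, and using rpc_get_methods as the readiness probe is an assumption here:

sock=/var/tmp/spdk-nbd.sock
test/event/app_repeat/app_repeat -r "$sock" -m 0x3 -t 4 &
repeat_pid=$!
trap 'kill "$repeat_pid"; exit 1' SIGINT SIGTERM EXIT

until scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$repeat_pid"        # abort if the app died before it started listening
    sleep 0.5
done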
00:07:30.858 10:34:56 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:30.858 10:34:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:30.858 10:34:56 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:30.858 10:34:56 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:30.858 10:34:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:30.858 Malloc0 00:07:30.858 10:34:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:30.858 Malloc1 00:07:30.858 10:34:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:30.858 10:34:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.858 10:34:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:30.858 10:34:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:30.858 10:34:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.858 10:34:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:30.858 10:34:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:30.858 10:34:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.858 10:34:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:30.858 10:34:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:30.858 10:34:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.858 10:34:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:30.858 10:34:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:30.858 10:34:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:30.858 10:34:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:30.858 10:34:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:31.424 /dev/nbd0 00:07:31.424 10:34:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:31.424 10:34:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:31.424 10:34:57 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:31.424 10:34:57 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:31.424 10:34:57 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:31.424 10:34:57 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:31.424 10:34:57 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:31.424 10:34:57 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:31.424 10:34:57 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:31.424 10:34:57 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:31.424 10:34:57 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:31.424 1+0 records in 00:07:31.424 1+0 records out 00:07:31.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248075 s, 16.5 MB/s 00:07:31.424 10:34:57 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:31.424 10:34:57 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:31.424 10:34:57 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:31.424 10:34:57 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:31.424 10:34:57 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:31.424 10:34:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:31.424 10:34:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:31.424 10:34:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:31.682 /dev/nbd1 00:07:31.682 10:34:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:31.682 10:34:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:31.682 10:34:57 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:31.682 10:34:57 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:31.682 10:34:57 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:31.682 10:34:57 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:31.682 10:34:57 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:31.682 10:34:57 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:31.682 10:34:57 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:31.682 10:34:57 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:31.682 10:34:57 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:31.682 1+0 records in 00:07:31.682 1+0 records out 00:07:31.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271631 s, 15.1 MB/s 00:07:31.682 10:34:57 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:31.682 10:34:57 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:31.682 10:34:57 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:31.682 10:34:57 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:31.682 10:34:57 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:31.682 10:34:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:31.682 10:34:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:31.682 10:34:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:31.682 10:34:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.682 10:34:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:31.940 { 00:07:31.940 "nbd_device": "/dev/nbd0", 00:07:31.940 "bdev_name": "Malloc0" 00:07:31.940 }, 00:07:31.940 { 00:07:31.940 "nbd_device": "/dev/nbd1", 00:07:31.940 "bdev_name": "Malloc1" 00:07:31.940 } 00:07:31.940 ]' 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:31.940 { 00:07:31.940 "nbd_device": "/dev/nbd0", 00:07:31.940 "bdev_name": "Malloc0" 00:07:31.940 }, 00:07:31.940 { 00:07:31.940 "nbd_device": "/dev/nbd1", 00:07:31.940 "bdev_name": "Malloc1" 00:07:31.940 } 00:07:31.940 ]' 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:31.940 /dev/nbd1' 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:31.940 /dev/nbd1' 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:31.940 256+0 records in 00:07:31.940 256+0 records out 00:07:31.940 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107994 s, 97.1 MB/s 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:31.940 256+0 records in 00:07:31.940 256+0 records out 00:07:31.940 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198278 s, 52.9 MB/s 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:31.940 256+0 records in 00:07:31.940 256+0 records out 00:07:31.940 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0311264 s, 33.7 MB/s 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:31.940 10:34:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:31.940 10:34:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:31.940 10:34:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:31.940 10:34:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.940 10:34:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:31.940 10:34:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:31.940 10:34:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:31.940 10:34:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.940 10:34:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:32.507 10:34:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:32.507 10:34:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:32.507 10:34:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:32.507 10:34:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.507 10:34:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.507 10:34:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:32.507 10:34:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:32.507 10:34:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.507 10:34:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.507 10:34:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:32.765 10:34:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:32.765 10:34:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:32.765 10:34:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:32.765 10:34:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.765 10:34:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.765 10:34:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:32.765 10:34:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:32.765 10:34:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.765 10:34:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:07:32.765 10:34:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.765 10:34:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:33.023 10:34:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:33.023 10:34:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:33.023 10:34:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:33.023 10:34:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:33.023 10:34:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:33.023 10:34:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:33.023 10:34:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:33.023 10:34:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:33.023 10:34:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:33.023 10:34:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:33.023 10:34:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:33.023 10:34:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:33.023 10:34:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:33.281 10:34:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:33.539 [2024-11-05 10:34:59.430226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:33.539 [2024-11-05 10:34:59.483230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.539 [2024-11-05 10:34:59.483235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.539 [2024-11-05 10:34:59.527691] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:33.539 [2024-11-05 10:34:59.527747] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:36.820 10:35:02 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2853642 /var/tmp/spdk-nbd.sock 00:07:36.820 10:35:02 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2853642 ']' 00:07:36.820 10:35:02 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:36.820 10:35:02 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:36.820 10:35:02 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:36.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
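Condensed from the xtrace above, the nbd data-verify pass that app_repeat runs against the /var/tmp/spdk-nbd.sock RPC socket amounts to the sequence below. This is a sketch, not the event.sh/nbd_common.sh source: the Jenkins workspace checkout is abbreviated to $SPDK, the target is assumed to already be up and listening, and the temp file keeps the nbdrandtest name used in the trace.

    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096                       # creates Malloc0
    $RPC bdev_malloc_create 64 4096                       # creates Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0                 # expose each bdev as a block device
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256   # 1 MiB reference pattern
    for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct   # write the pattern
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M nbdrandtest "$nbd"                     # read back and verify
    done
    rm nbdrandtest
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1
    $RPC nbd_get_disks                                    # expect '[]' once both are detached
    $RPC spdk_kill_instance SIGTERM                       # end this repeat round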
00:07:36.820 10:35:02 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:36.820 10:35:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:36.820 10:35:02 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:36.820 10:35:02 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:36.820 10:35:02 event.app_repeat -- event/event.sh@39 -- # killprocess 2853642 00:07:36.820 10:35:02 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 2853642 ']' 00:07:36.820 10:35:02 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 2853642 00:07:36.820 10:35:02 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:07:36.820 10:35:02 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:36.820 10:35:02 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2853642 00:07:36.820 10:35:02 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:36.820 10:35:02 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:36.820 10:35:02 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2853642' 00:07:36.820 killing process with pid 2853642 00:07:36.821 10:35:02 event.app_repeat -- common/autotest_common.sh@971 -- # kill 2853642 00:07:36.821 10:35:02 event.app_repeat -- common/autotest_common.sh@976 -- # wait 2853642 00:07:36.821 spdk_app_start is called in Round 0. 00:07:36.821 Shutdown signal received, stop current app iteration 00:07:36.821 Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 reinitialization... 00:07:36.821 spdk_app_start is called in Round 1. 00:07:36.821 Shutdown signal received, stop current app iteration 00:07:36.821 Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 reinitialization... 00:07:36.821 spdk_app_start is called in Round 2. 00:07:36.821 Shutdown signal received, stop current app iteration 00:07:36.821 Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 reinitialization... 00:07:36.821 spdk_app_start is called in Round 3. 
00:07:36.821 Shutdown signal received, stop current app iteration 00:07:36.821 10:35:02 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:36.821 10:35:02 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:36.821 00:07:36.821 real 0m19.326s 00:07:36.821 user 0m42.362s 00:07:36.821 sys 0m4.147s 00:07:36.821 10:35:02 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:36.821 10:35:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:36.821 ************************************ 00:07:36.821 END TEST app_repeat 00:07:36.821 ************************************ 00:07:36.821 10:35:02 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:36.821 10:35:02 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:36.821 10:35:02 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:36.821 10:35:02 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:36.821 10:35:02 event -- common/autotest_common.sh@10 -- # set +x 00:07:36.821 ************************************ 00:07:36.821 START TEST cpu_locks 00:07:36.821 ************************************ 00:07:36.821 10:35:02 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:36.821 * Looking for test storage... 00:07:36.821 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:07:36.821 10:35:02 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:36.821 10:35:02 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:07:36.821 10:35:02 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:37.079 10:35:02 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.079 10:35:02 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:37.079 10:35:02 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.079 10:35:02 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:37.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.079 --rc genhtml_branch_coverage=1 00:07:37.079 --rc genhtml_function_coverage=1 00:07:37.079 --rc genhtml_legend=1 00:07:37.079 --rc geninfo_all_blocks=1 00:07:37.079 --rc geninfo_unexecuted_blocks=1 00:07:37.079 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:37.079 ' 00:07:37.079 10:35:02 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:37.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.079 --rc genhtml_branch_coverage=1 00:07:37.079 --rc genhtml_function_coverage=1 00:07:37.079 --rc genhtml_legend=1 00:07:37.079 --rc geninfo_all_blocks=1 00:07:37.079 --rc geninfo_unexecuted_blocks=1 00:07:37.079 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:37.079 ' 00:07:37.079 10:35:02 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:37.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.079 --rc genhtml_branch_coverage=1 00:07:37.079 --rc genhtml_function_coverage=1 00:07:37.079 --rc genhtml_legend=1 00:07:37.079 --rc geninfo_all_blocks=1 00:07:37.079 --rc geninfo_unexecuted_blocks=1 00:07:37.079 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:37.079 ' 00:07:37.079 10:35:02 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:37.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.079 --rc genhtml_branch_coverage=1 00:07:37.079 --rc genhtml_function_coverage=1 00:07:37.079 --rc genhtml_legend=1 00:07:37.079 --rc geninfo_all_blocks=1 00:07:37.079 --rc geninfo_unexecuted_blocks=1 00:07:37.079 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:37.079 ' 00:07:37.079 10:35:02 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:37.079 10:35:02 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:37.079 10:35:02 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:37.079 10:35:02 event.cpu_locks -- 
event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:37.079 10:35:02 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:37.079 10:35:02 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:37.079 10:35:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:37.079 ************************************ 00:07:37.079 START TEST default_locks 00:07:37.080 ************************************ 00:07:37.080 10:35:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:07:37.080 10:35:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2856494 00:07:37.080 10:35:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2856494 00:07:37.080 10:35:03 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 2856494 ']' 00:07:37.080 10:35:03 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.080 10:35:03 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:37.080 10:35:03 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.080 10:35:03 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:37.080 10:35:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:37.080 10:35:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:37.080 [2024-11-05 10:35:03.045367] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
00:07:37.080 [2024-11-05 10:35:03.045428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2856494 ] 00:07:37.338 [2024-11-05 10:35:03.166573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.338 [2024-11-05 10:35:03.221844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.682 10:35:03 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:37.682 10:35:03 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:07:37.682 10:35:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2856494 00:07:37.682 10:35:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2856494 00:07:37.682 10:35:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:37.960 lslocks: write error 00:07:37.960 10:35:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2856494 00:07:37.960 10:35:03 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 2856494 ']' 00:07:37.960 10:35:03 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 2856494 00:07:37.960 10:35:03 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:07:37.960 10:35:03 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:37.960 10:35:03 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2856494 00:07:38.218 10:35:04 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:38.218 10:35:04 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:38.218 10:35:04 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2856494' 00:07:38.218 killing process with pid 2856494 00:07:38.218 10:35:04 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 2856494 00:07:38.218 10:35:04 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 2856494 00:07:38.479 10:35:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2856494 00:07:38.479 10:35:04 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:38.479 10:35:04 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2856494 00:07:38.479 10:35:04 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:38.479 10:35:04 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.479 10:35:04 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:38.479 10:35:04 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.479 10:35:04 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2856494 00:07:38.479 10:35:04 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 2856494 ']' 00:07:38.479 10:35:04 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.479 10:35:04 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:07:38.479 10:35:04 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.479 10:35:04 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:38.479 10:35:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:38.479 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2856494) - No such process 00:07:38.479 ERROR: process (pid: 2856494) is no longer running 00:07:38.479 10:35:04 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:38.479 10:35:04 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:07:38.479 10:35:04 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:38.479 10:35:04 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:38.479 10:35:04 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:38.479 10:35:04 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:38.479 10:35:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:38.479 10:35:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:38.479 10:35:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:38.479 10:35:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:38.479 00:07:38.479 real 0m1.391s 00:07:38.479 user 0m1.398s 00:07:38.479 sys 0m0.641s 00:07:38.479 10:35:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:38.479 10:35:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:38.479 ************************************ 00:07:38.479 END TEST default_locks 00:07:38.479 ************************************ 00:07:38.479 10:35:04 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:38.479 10:35:04 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:38.479 10:35:04 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:38.479 10:35:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:38.479 ************************************ 00:07:38.479 START TEST default_locks_via_rpc 00:07:38.479 ************************************ 00:07:38.479 10:35:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:07:38.479 10:35:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2856707 00:07:38.479 10:35:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2856707 00:07:38.479 10:35:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:38.479 10:35:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2856707 ']' 00:07:38.479 10:35:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.479 10:35:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 
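The default_locks run traced above (spdk_tgt pid 2856494) checks the per-core lock directly: while a single-core target is running, lslocks on its pid must report an spdk_cpu_lock entry, and after the process is killed the follow-up waitforlisten on the stale pid is expected to fail ("No such process" above). Stripped of the autotest_common.sh helpers, the positive half of that check is roughly the following; the sleep stands in for the real waitforlisten polling and the echo is only illustrative.

    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt -m 0x1 &          # single reactor, claims the core 0 lock
    pid=$!
    sleep 1                                    # stand-in for waitforlisten on /var/tmp/spdk.sock
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core 0 lock is held"
    kill "$pid"; wait "$pid"                   # after this, the pid is gone and no lock remains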
00:07:38.479 10:35:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.479 10:35:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:38.479 10:35:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.479 [2024-11-05 10:35:04.506046] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:07:38.479 [2024-11-05 10:35:04.506129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2856707 ] 00:07:38.738 [2024-11-05 10:35:04.628398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.738 [2024-11-05 10:35:04.683862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.996 10:35:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:38.996 10:35:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:38.996 10:35:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:38.996 10:35:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.996 10:35:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.996 10:35:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.996 10:35:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:38.996 10:35:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:38.996 10:35:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:38.996 10:35:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:38.996 10:35:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:38.996 10:35:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.996 10:35:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.996 10:35:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.996 10:35:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2856707 00:07:38.996 10:35:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2856707 00:07:38.996 10:35:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:39.562 10:35:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2856707 00:07:39.562 10:35:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 2856707 ']' 00:07:39.562 10:35:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 2856707 00:07:39.562 10:35:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:07:39.562 10:35:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- 
# '[' Linux = Linux ']' 00:07:39.562 10:35:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2856707 00:07:39.562 10:35:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:39.562 10:35:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:39.562 10:35:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2856707' 00:07:39.562 killing process with pid 2856707 00:07:39.562 10:35:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 2856707 00:07:39.562 10:35:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 2856707 00:07:39.820 00:07:39.820 real 0m1.388s 00:07:39.820 user 0m1.359s 00:07:39.820 sys 0m0.679s 00:07:39.820 10:35:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:39.820 10:35:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.820 ************************************ 00:07:39.820 END TEST default_locks_via_rpc 00:07:39.820 ************************************ 00:07:40.079 10:35:05 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:40.079 10:35:05 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:40.079 10:35:05 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:40.079 10:35:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:40.079 ************************************ 00:07:40.079 START TEST non_locking_app_on_locked_coremask 00:07:40.079 ************************************ 00:07:40.079 10:35:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:07:40.079 10:35:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2856910 00:07:40.080 10:35:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2856910 /var/tmp/spdk.sock 00:07:40.080 10:35:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:40.080 10:35:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2856910 ']' 00:07:40.080 10:35:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.080 10:35:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:40.080 10:35:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.080 10:35:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:40.080 10:35:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.080 [2024-11-05 10:35:05.974410] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
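default_locks_via_rpc (pid 2856707, above) exercises the same lock through RPC calls instead of command-line flags: after rpc_cmd framework_disable_cpumask_locks the trace finds no lock files (lock_files=()), and after framework_enable_cpumask_locks the lslocks check sees spdk_cpu_lock again. A minimal sketch of that toggle, assuming the default /var/tmp/spdk.sock RPC socket and using lslocks for both checks rather than the test's file-glob helper:

    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt -m 0x1 &
    pid=$!
    sleep 1                                               # stand-in for waitforlisten
    $SPDK/scripts/rpc.py framework_disable_cpumask_locks  # drop the core 0 lock at runtime
    ! lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "lock released"
    $SPDK/scripts/rpc.py framework_enable_cpumask_locks   # take it again
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "lock re-acquired"
    kill "$pid"; wait "$pid"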
00:07:40.080 [2024-11-05 10:35:05.974472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2856910 ] 00:07:40.080 [2024-11-05 10:35:06.101186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.080 [2024-11-05 10:35:06.155833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.339 10:35:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:40.339 10:35:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:40.339 10:35:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2856981 00:07:40.339 10:35:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2856981 /var/tmp/spdk2.sock 00:07:40.339 10:35:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:40.339 10:35:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2856981 ']' 00:07:40.339 10:35:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:40.339 10:35:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:40.339 10:35:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:40.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:40.339 10:35:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:40.339 10:35:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.598 [2024-11-05 10:35:06.418935] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:07:40.598 [2024-11-05 10:35:06.419015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2856981 ] 00:07:40.598 [2024-11-05 10:35:06.571614] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:40.598 [2024-11-05 10:35:06.571656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.856 [2024-11-05 10:35:06.681334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.423 10:35:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:41.423 10:35:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:41.423 10:35:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2856910 00:07:41.423 10:35:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2856910 00:07:41.423 10:35:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:42.360 lslocks: write error 00:07:42.360 10:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2856910 00:07:42.360 10:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2856910 ']' 00:07:42.360 10:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2856910 00:07:42.360 10:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:42.360 10:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:42.360 10:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2856910 00:07:42.360 10:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:42.360 10:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:42.360 10:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2856910' 00:07:42.360 killing process with pid 2856910 00:07:42.360 10:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2856910 00:07:42.360 10:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2856910 00:07:42.929 10:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2856981 00:07:42.929 10:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2856981 ']' 00:07:42.929 10:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2856981 00:07:42.929 10:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:42.929 10:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:42.929 10:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2856981 00:07:42.929 10:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:42.929 10:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:42.929 10:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2856981' 00:07:42.929 
killing process with pid 2856981 00:07:42.929 10:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2856981 00:07:42.929 10:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2856981 00:07:43.497 00:07:43.497 real 0m3.325s 00:07:43.497 user 0m3.497s 00:07:43.497 sys 0m1.256s 00:07:43.497 10:35:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:43.497 10:35:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:43.498 ************************************ 00:07:43.498 END TEST non_locking_app_on_locked_coremask 00:07:43.498 ************************************ 00:07:43.498 10:35:09 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:43.498 10:35:09 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:43.498 10:35:09 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:43.498 10:35:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:43.498 ************************************ 00:07:43.498 START TEST locking_app_on_unlocked_coremask 00:07:43.498 ************************************ 00:07:43.498 10:35:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:07:43.498 10:35:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2857485 00:07:43.498 10:35:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2857485 /var/tmp/spdk.sock 00:07:43.498 10:35:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:43.498 10:35:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2857485 ']' 00:07:43.498 10:35:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.498 10:35:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:43.498 10:35:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.498 10:35:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:43.498 10:35:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:43.498 [2024-11-05 10:35:09.383362] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:07:43.498 [2024-11-05 10:35:09.383432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2857485 ] 00:07:43.498 [2024-11-05 10:35:09.512025] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
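The non_locking_app_on_locked_coremask run above (pids 2856910 and 2856981) shows the cooperative case: a second target may share core 0 with a lock-holding one as long as it opts out of the locking, which is what the --disable-cpumask-locks flag and the separate -r /var/tmp/spdk2.sock socket in the trace are for. The shape of it, with the harness details omitted:

    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt -m 0x1 &                    # first instance holds the core 0 lock
    # second instance on the same core, explicitly not taking the lock
    $SPDK/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    # the second target logs "CPU core locks deactivated." and comes up normally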
00:07:43.498 [2024-11-05 10:35:09.512067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.498 [2024-11-05 10:35:09.567938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.442 10:35:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:44.442 10:35:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:44.442 10:35:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:44.442 10:35:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2857504 00:07:44.442 10:35:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2857504 /var/tmp/spdk2.sock 00:07:44.442 10:35:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2857504 ']' 00:07:44.442 10:35:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:44.442 10:35:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:44.442 10:35:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:44.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:44.442 10:35:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:44.442 10:35:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.442 [2024-11-05 10:35:10.271359] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
00:07:44.442 [2024-11-05 10:35:10.271448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2857504 ] 00:07:44.442 [2024-11-05 10:35:10.427808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.701 [2024-11-05 10:35:10.534056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.268 10:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:45.268 10:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:45.268 10:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2857504 00:07:45.268 10:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2857504 00:07:45.268 10:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:46.203 lslocks: write error 00:07:46.203 10:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2857485 00:07:46.203 10:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2857485 ']' 00:07:46.203 10:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 2857485 00:07:46.203 10:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:46.203 10:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:46.203 10:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2857485 00:07:46.203 10:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:46.203 10:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:46.203 10:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2857485' 00:07:46.203 killing process with pid 2857485 00:07:46.203 10:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 2857485 00:07:46.203 10:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 2857485 00:07:46.770 10:35:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2857504 00:07:46.770 10:35:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2857504 ']' 00:07:46.770 10:35:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 2857504 00:07:46.770 10:35:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:46.770 10:35:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:46.770 10:35:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2857504 00:07:46.770 10:35:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:46.770 10:35:12 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:46.770 10:35:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2857504' 00:07:46.770 killing process with pid 2857504 00:07:46.770 10:35:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 2857504 00:07:46.770 10:35:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 2857504 00:07:47.029 00:07:47.029 real 0m3.713s 00:07:47.029 user 0m3.961s 00:07:47.029 sys 0m1.308s 00:07:47.029 10:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:47.029 10:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.029 ************************************ 00:07:47.029 END TEST locking_app_on_unlocked_coremask 00:07:47.029 ************************************ 00:07:47.287 10:35:13 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:47.287 10:35:13 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:47.287 10:35:13 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:47.287 10:35:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:47.287 ************************************ 00:07:47.287 START TEST locking_app_on_locked_coremask 00:07:47.287 ************************************ 00:07:47.287 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:07:47.287 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2857970 00:07:47.287 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2857970 /var/tmp/spdk.sock 00:07:47.287 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:47.287 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2857970 ']' 00:07:47.287 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.287 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:47.287 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.287 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:47.287 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.287 [2024-11-05 10:35:13.179827] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
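locking_app_on_unlocked_coremask (pids 2857485 and 2857504, above) inverts that arrangement: here the first target is the one started with --disable-cpumask-locks, so the plain -m 0x1 instance on /var/tmp/spdk2.sock is free to claim the core 0 lock itself, which the trace confirms with lslocks against the second pid. Roughly, with the same $SPDK abbreviation and sleep stand-in as before:

    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # leaves core 0 unlocked
    $SPDK/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # second instance takes the lock
    pid2=$!
    sleep 1                                                     # stand-in for waitforlisten
    lslocks -p "$pid2" | grep -q spdk_cpu_lock && echo "lock held by the second instance"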
00:07:47.287 [2024-11-05 10:35:13.179897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2857970 ] 00:07:47.287 [2024-11-05 10:35:13.306587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.287 [2024-11-05 10:35:13.359856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.546 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:47.546 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:47.546 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2858057 00:07:47.546 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2858057 /var/tmp/spdk2.sock 00:07:47.546 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:47.546 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:47.546 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2858057 /var/tmp/spdk2.sock 00:07:47.546 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:47.546 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.546 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:47.546 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.546 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2858057 /var/tmp/spdk2.sock 00:07:47.546 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2858057 ']' 00:07:47.546 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:47.546 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:47.546 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:47.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:47.546 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:47.547 10:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.547 [2024-11-05 10:35:13.616302] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
00:07:47.547 [2024-11-05 10:35:13.616382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2858057 ] 00:07:47.805 [2024-11-05 10:35:13.786762] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2857970 has claimed it. 00:07:47.805 [2024-11-05 10:35:13.786813] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:48.371 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2858057) - No such process 00:07:48.371 ERROR: process (pid: 2858057) is no longer running 00:07:48.371 10:35:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:48.371 10:35:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:07:48.371 10:35:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:48.371 10:35:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:48.371 10:35:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:48.371 10:35:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:48.371 10:35:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2857970 00:07:48.371 10:35:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2857970 00:07:48.371 10:35:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:49.306 lslocks: write error 00:07:49.306 10:35:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2857970 00:07:49.306 10:35:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2857970 ']' 00:07:49.306 10:35:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2857970 00:07:49.306 10:35:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:49.306 10:35:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:49.306 10:35:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2857970 00:07:49.306 10:35:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:49.306 10:35:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:49.306 10:35:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2857970' 00:07:49.306 killing process with pid 2857970 00:07:49.306 10:35:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2857970 00:07:49.306 10:35:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2857970 00:07:49.566 00:07:49.566 real 0m2.321s 00:07:49.566 user 0m2.437s 00:07:49.566 sys 0m0.904s 00:07:49.566 10:35:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:07:49.566 10:35:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:49.566 ************************************ 00:07:49.566 END TEST locking_app_on_locked_coremask 00:07:49.566 ************************************ 00:07:49.566 10:35:15 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:49.566 10:35:15 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:49.566 10:35:15 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:49.566 10:35:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:49.566 ************************************ 00:07:49.566 START TEST locking_overlapped_coremask 00:07:49.566 ************************************ 00:07:49.566 10:35:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:07:49.566 10:35:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2858270 00:07:49.566 10:35:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2858270 /var/tmp/spdk.sock 00:07:49.566 10:35:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:49.566 10:35:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 2858270 ']' 00:07:49.566 10:35:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.566 10:35:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:49.566 10:35:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.566 10:35:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:49.566 10:35:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:49.566 [2024-11-05 10:35:15.557304] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
00:07:49.566 [2024-11-05 10:35:15.557348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2858270 ] 00:07:49.825 [2024-11-05 10:35:15.666001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:49.825 [2024-11-05 10:35:15.728605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.825 [2024-11-05 10:35:15.728695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.825 [2024-11-05 10:35:15.728700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.391 10:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:50.391 10:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:50.391 10:35:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2858446 00:07:50.391 10:35:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2858446 /var/tmp/spdk2.sock 00:07:50.391 10:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:50.391 10:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2858446 /var/tmp/spdk2.sock 00:07:50.391 10:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:50.391 10:35:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:50.391 10:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.391 10:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:50.391 10:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.391 10:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2858446 /var/tmp/spdk2.sock 00:07:50.391 10:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 2858446 ']' 00:07:50.392 10:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:50.392 10:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:50.392 10:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:50.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:50.392 10:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:50.392 10:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:50.392 [2024-11-05 10:35:16.458756] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
00:07:50.392 [2024-11-05 10:35:16.458826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2858446 ] 00:07:50.650 [2024-11-05 10:35:16.577101] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2858270 has claimed it. 00:07:50.650 [2024-11-05 10:35:16.577137] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:51.215 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2858446) - No such process 00:07:51.215 ERROR: process (pid: 2858446) is no longer running 00:07:51.215 10:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:51.215 10:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:07:51.215 10:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:51.215 10:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:51.215 10:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:51.215 10:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:51.215 10:35:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:51.216 10:35:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:51.216 10:35:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:51.216 10:35:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:51.216 10:35:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2858270 00:07:51.216 10:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 2858270 ']' 00:07:51.216 10:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 2858270 00:07:51.216 10:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:07:51.216 10:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:51.216 10:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2858270 00:07:51.216 10:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:51.216 10:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:51.216 10:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2858270' 00:07:51.216 killing process with pid 2858270 00:07:51.216 10:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 2858270 00:07:51.216 10:35:17 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 2858270 00:07:51.782 00:07:51.782 real 0m2.060s 00:07:51.782 user 0m5.903s 00:07:51.782 sys 0m0.537s 00:07:51.782 10:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:51.782 10:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:51.782 ************************************ 00:07:51.782 END TEST locking_overlapped_coremask 00:07:51.782 ************************************ 00:07:51.782 10:35:17 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:51.782 10:35:17 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:51.782 10:35:17 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:51.782 10:35:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:51.782 ************************************ 00:07:51.782 START TEST locking_overlapped_coremask_via_rpc 00:07:51.782 ************************************ 00:07:51.782 10:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:07:51.782 10:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2858648 00:07:51.782 10:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2858648 /var/tmp/spdk.sock 00:07:51.782 10:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:51.782 10:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2858648 ']' 00:07:51.782 10:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.782 10:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:51.782 10:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.782 10:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:51.782 10:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.782 [2024-11-05 10:35:17.713931] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:07:51.782 [2024-11-05 10:35:17.714011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2858648 ] 00:07:51.782 [2024-11-05 10:35:17.825615] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:51.782 [2024-11-05 10:35:17.825656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:52.041 [2024-11-05 10:35:17.884651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.041 [2024-11-05 10:35:17.884747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.041 [2024-11-05 10:35:17.884753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.607 10:35:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:52.607 10:35:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:52.607 10:35:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2858800 00:07:52.607 10:35:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:52.607 10:35:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2858800 /var/tmp/spdk2.sock 00:07:52.607 10:35:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2858800 ']' 00:07:52.607 10:35:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:52.607 10:35:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:52.607 10:35:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:52.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:52.607 10:35:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:52.607 10:35:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.607 [2024-11-05 10:35:18.616791] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:07:52.607 [2024-11-05 10:35:18.616883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2858800 ] 00:07:52.867 [2024-11-05 10:35:18.738945] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:52.867 [2024-11-05 10:35:18.738981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:52.867 [2024-11-05 10:35:18.840566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.867 [2024-11-05 10:35:18.840654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:52.867 [2024-11-05 10:35:18.840655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.435 [2024-11-05 10:35:19.306779] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2858648 has claimed it. 
00:07:53.435 request: 00:07:53.435 { 00:07:53.435 "method": "framework_enable_cpumask_locks", 00:07:53.435 "req_id": 1 00:07:53.435 } 00:07:53.435 Got JSON-RPC error response 00:07:53.435 response: 00:07:53.435 { 00:07:53.435 "code": -32603, 00:07:53.435 "message": "Failed to claim CPU core: 2" 00:07:53.435 } 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2858648 /var/tmp/spdk.sock 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2858648 ']' 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:53.435 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.699 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:53.699 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:53.699 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2858800 /var/tmp/spdk2.sock 00:07:53.699 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2858800 ']' 00:07:53.699 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:53.699 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:53.699 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:53.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
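Editorial note (not part of the captured log): the JSON-RPC error above comes from asking the second target to lock a core the first target already claimed. A minimal sketch of issuing the same RPC by hand with the in-tree scripts/rpc.py, assuming the two sockets used in this test:
./scripts/rpc.py framework_enable_cpumask_locks                            # first target on /var/tmp/spdk.sock: claims its cores
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks     # overlapping target: fails with "Failed to claim CPU core: 2"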
00:07:53.699 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:53.699 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.699 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:53.699 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:53.699 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:53.699 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:53.699 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:53.699 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:53.699 00:07:53.699 real 0m2.044s 00:07:53.699 user 0m0.902s 00:07:53.699 sys 0m0.171s 00:07:53.699 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:53.699 10:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.699 ************************************ 00:07:53.699 END TEST locking_overlapped_coremask_via_rpc 00:07:53.699 ************************************ 00:07:53.699 10:35:19 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:53.699 10:35:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2858648 ]] 00:07:53.699 10:35:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2858648 00:07:53.699 10:35:19 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2858648 ']' 00:07:53.699 10:35:19 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2858648 00:07:53.699 10:35:19 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:53.963 10:35:19 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:53.963 10:35:19 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2858648 00:07:53.963 10:35:19 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:53.963 10:35:19 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:53.963 10:35:19 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2858648' 00:07:53.963 killing process with pid 2858648 00:07:53.963 10:35:19 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 2858648 00:07:53.963 10:35:19 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 2858648 00:07:54.222 10:35:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2858800 ]] 00:07:54.222 10:35:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2858800 00:07:54.222 10:35:20 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2858800 ']' 00:07:54.222 10:35:20 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2858800 00:07:54.222 10:35:20 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:54.222 10:35:20 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:07:54.222 10:35:20 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2858800 00:07:54.222 10:35:20 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:07:54.222 10:35:20 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:07:54.222 10:35:20 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2858800' 00:07:54.222 killing process with pid 2858800 00:07:54.222 10:35:20 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 2858800 00:07:54.222 10:35:20 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 2858800 00:07:54.790 10:35:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:54.790 10:35:20 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:54.790 10:35:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2858648 ]] 00:07:54.790 10:35:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2858648 00:07:54.790 10:35:20 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2858648 ']' 00:07:54.790 10:35:20 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2858648 00:07:54.790 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2858648) - No such process 00:07:54.790 10:35:20 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 2858648 is not found' 00:07:54.790 Process with pid 2858648 is not found 00:07:54.790 10:35:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2858800 ]] 00:07:54.790 10:35:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2858800 00:07:54.790 10:35:20 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2858800 ']' 00:07:54.790 10:35:20 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2858800 00:07:54.790 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2858800) - No such process 00:07:54.790 10:35:20 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 2858800 is not found' 00:07:54.790 Process with pid 2858800 is not found 00:07:54.790 10:35:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:54.790 00:07:54.790 real 0m17.762s 00:07:54.790 user 0m29.939s 00:07:54.790 sys 0m6.670s 00:07:54.790 10:35:20 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:54.790 10:35:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:54.790 ************************************ 00:07:54.790 END TEST cpu_locks 00:07:54.790 ************************************ 00:07:54.790 00:07:54.790 real 0m45.992s 00:07:54.790 user 1m26.918s 00:07:54.790 sys 0m12.021s 00:07:54.790 10:35:20 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:54.790 10:35:20 event -- common/autotest_common.sh@10 -- # set +x 00:07:54.790 ************************************ 00:07:54.790 END TEST event 00:07:54.790 ************************************ 00:07:54.790 10:35:20 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:07:54.790 10:35:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:54.790 10:35:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:54.790 10:35:20 -- common/autotest_common.sh@10 -- # set +x 00:07:54.790 ************************************ 00:07:54.790 START TEST thread 00:07:54.790 ************************************ 00:07:54.790 10:35:20 thread -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:07:54.790 * Looking for test storage... 00:07:54.790 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:07:54.790 10:35:20 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:54.790 10:35:20 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:07:54.790 10:35:20 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:55.049 10:35:20 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:55.049 10:35:20 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:55.049 10:35:20 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:55.049 10:35:20 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:55.049 10:35:20 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.049 10:35:20 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:55.049 10:35:20 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:55.049 10:35:20 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:55.049 10:35:20 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:55.049 10:35:20 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:55.049 10:35:20 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:55.049 10:35:20 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:55.049 10:35:20 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:55.049 10:35:20 thread -- scripts/common.sh@345 -- # : 1 00:07:55.049 10:35:20 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:55.049 10:35:20 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:55.049 10:35:20 thread -- scripts/common.sh@365 -- # decimal 1 00:07:55.049 10:35:20 thread -- scripts/common.sh@353 -- # local d=1 00:07:55.049 10:35:20 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.049 10:35:20 thread -- scripts/common.sh@355 -- # echo 1 00:07:55.049 10:35:20 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:55.049 10:35:20 thread -- scripts/common.sh@366 -- # decimal 2 00:07:55.049 10:35:20 thread -- scripts/common.sh@353 -- # local d=2 00:07:55.049 10:35:20 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.049 10:35:20 thread -- scripts/common.sh@355 -- # echo 2 00:07:55.049 10:35:20 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:55.049 10:35:20 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:55.049 10:35:20 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:55.049 10:35:20 thread -- scripts/common.sh@368 -- # return 0 00:07:55.049 10:35:20 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.049 10:35:20 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:55.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.049 --rc genhtml_branch_coverage=1 00:07:55.049 --rc genhtml_function_coverage=1 00:07:55.049 --rc genhtml_legend=1 00:07:55.049 --rc geninfo_all_blocks=1 00:07:55.049 --rc geninfo_unexecuted_blocks=1 00:07:55.049 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:55.049 ' 00:07:55.049 10:35:20 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:55.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.049 --rc genhtml_branch_coverage=1 00:07:55.049 --rc genhtml_function_coverage=1 00:07:55.049 --rc genhtml_legend=1 
00:07:55.049 --rc geninfo_all_blocks=1 00:07:55.049 --rc geninfo_unexecuted_blocks=1 00:07:55.050 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:55.050 ' 00:07:55.050 10:35:20 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:55.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.050 --rc genhtml_branch_coverage=1 00:07:55.050 --rc genhtml_function_coverage=1 00:07:55.050 --rc genhtml_legend=1 00:07:55.050 --rc geninfo_all_blocks=1 00:07:55.050 --rc geninfo_unexecuted_blocks=1 00:07:55.050 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:55.050 ' 00:07:55.050 10:35:20 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:55.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.050 --rc genhtml_branch_coverage=1 00:07:55.050 --rc genhtml_function_coverage=1 00:07:55.050 --rc genhtml_legend=1 00:07:55.050 --rc geninfo_all_blocks=1 00:07:55.050 --rc geninfo_unexecuted_blocks=1 00:07:55.050 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:55.050 ' 00:07:55.050 10:35:20 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:55.050 10:35:20 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:55.050 10:35:20 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:55.050 10:35:20 thread -- common/autotest_common.sh@10 -- # set +x 00:07:55.050 ************************************ 00:07:55.050 START TEST thread_poller_perf 00:07:55.050 ************************************ 00:07:55.050 10:35:20 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:55.050 [2024-11-05 10:35:20.943385] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:07:55.050 [2024-11-05 10:35:20.943472] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2859113 ] 00:07:55.050 [2024-11-05 10:35:21.069422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.050 [2024-11-05 10:35:21.123501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.050 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:56.426 [2024-11-05T09:35:22.503Z] ====================================== 00:07:56.426 [2024-11-05T09:35:22.503Z] busy:2308417678 (cyc) 00:07:56.426 [2024-11-05T09:35:22.503Z] total_run_count: 526000 00:07:56.426 [2024-11-05T09:35:22.503Z] tsc_hz: 2300000000 (cyc) 00:07:56.426 [2024-11-05T09:35:22.503Z] ====================================== 00:07:56.426 [2024-11-05T09:35:22.503Z] poller_cost: 4388 (cyc), 1907 (nsec) 00:07:56.426 00:07:56.426 real 0m1.254s 00:07:56.426 user 0m1.128s 00:07:56.426 sys 0m0.120s 00:07:56.426 10:35:22 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:56.426 10:35:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:56.426 ************************************ 00:07:56.426 END TEST thread_poller_perf 00:07:56.426 ************************************ 00:07:56.426 10:35:22 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:56.426 10:35:22 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:56.426 10:35:22 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:56.426 10:35:22 thread -- common/autotest_common.sh@10 -- # set +x 00:07:56.426 ************************************ 00:07:56.426 START TEST thread_poller_perf 00:07:56.426 ************************************ 00:07:56.426 10:35:22 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:56.426 [2024-11-05 10:35:22.270835] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:07:56.426 [2024-11-05 10:35:22.270933] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2859311 ] 00:07:56.426 [2024-11-05 10:35:22.394735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.426 [2024-11-05 10:35:22.450531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.426 Running 1000 pollers for 1 seconds with 0 microseconds period. 
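Editorial note (not part of the captured log): the poller_cost figures in the first run above follow directly from the reported counters; a minimal arithmetic sketch using those numbers:
echo $(( 2308417678 / 526000 ))                       # 4388 busy cycles per poller invocation
awk 'BEGIN { print int(4388 / 2300000000 * 1e9) }'    # 1907 ns per invocation at the reported 2.3 GHz tsc_hz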
00:07:57.799 [2024-11-05T09:35:23.876Z] ====================================== 00:07:57.799 [2024-11-05T09:35:23.876Z] busy:2301842902 (cyc) 00:07:57.799 [2024-11-05T09:35:23.876Z] total_run_count: 8250000 00:07:57.799 [2024-11-05T09:35:23.876Z] tsc_hz: 2300000000 (cyc) 00:07:57.799 [2024-11-05T09:35:23.876Z] ====================================== 00:07:57.799 [2024-11-05T09:35:23.876Z] poller_cost: 279 (cyc), 121 (nsec) 00:07:57.799 00:07:57.799 real 0m1.247s 00:07:57.799 user 0m1.115s 00:07:57.799 sys 0m0.126s 00:07:57.799 10:35:23 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:57.799 10:35:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:57.799 ************************************ 00:07:57.799 END TEST thread_poller_perf 00:07:57.799 ************************************ 00:07:57.799 10:35:23 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:07:57.799 10:35:23 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:07:57.800 10:35:23 thread -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:57.800 10:35:23 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:57.800 10:35:23 thread -- common/autotest_common.sh@10 -- # set +x 00:07:57.800 ************************************ 00:07:57.800 START TEST thread_spdk_lock 00:07:57.800 ************************************ 00:07:57.800 10:35:23 thread.thread_spdk_lock -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:07:57.800 [2024-11-05 10:35:23.592042] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:07:57.800 [2024-11-05 10:35:23.592142] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2859509 ] 00:07:57.800 [2024-11-05 10:35:23.717070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:57.800 [2024-11-05 10:35:23.774593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.800 [2024-11-05 10:35:23.774597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.366 [2024-11-05 10:35:24.277606] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 980:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:58.366 [2024-11-05 10:35:24.277653] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3112:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:07:58.366 [2024-11-05 10:35:24.277669] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x14d2c80 00:07:58.366 [2024-11-05 10:35:24.278594] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:58.366 [2024-11-05 10:35:24.278700] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1041:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:58.366 [2024-11-05 
10:35:24.278736] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:58.366 Starting test contend 00:07:58.366 Worker Delay Wait us Hold us Total us 00:07:58.366 0 3 149478 191732 341211 00:07:58.366 1 5 82163 289040 371204 00:07:58.366 PASS test contend 00:07:58.366 Starting test hold_by_poller 00:07:58.366 PASS test hold_by_poller 00:07:58.366 Starting test hold_by_message 00:07:58.366 PASS test hold_by_message 00:07:58.366 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:07:58.366 100014 assertions passed 00:07:58.366 0 assertions failed 00:07:58.366 00:07:58.366 real 0m0.752s 00:07:58.366 user 0m1.125s 00:07:58.366 sys 0m0.126s 00:07:58.366 10:35:24 thread.thread_spdk_lock -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:58.366 10:35:24 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:07:58.366 ************************************ 00:07:58.366 END TEST thread_spdk_lock 00:07:58.366 ************************************ 00:07:58.366 00:07:58.366 real 0m3.670s 00:07:58.366 user 0m3.540s 00:07:58.366 sys 0m0.646s 00:07:58.366 10:35:24 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:58.366 10:35:24 thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.366 ************************************ 00:07:58.366 END TEST thread 00:07:58.366 ************************************ 00:07:58.366 10:35:24 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:58.366 10:35:24 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:58.366 10:35:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:58.366 10:35:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:58.366 10:35:24 -- common/autotest_common.sh@10 -- # set +x 00:07:58.366 ************************************ 00:07:58.366 START TEST app_cmdline 00:07:58.366 ************************************ 00:07:58.624 10:35:24 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:58.624 * Looking for test storage... 
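Editorial note (not part of the captured log): in the contend table above, each worker's Total us appears to be the sum of its Wait us and Hold us, up to rounding inside the tool:
echo $(( 149478 + 191732 ))   # 341210, matching worker 0's reported 341211 to within rounding
echo $(( 82163 + 289040 ))    # 371203, matching worker 1's reported 371204 to within rounding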
00:07:58.624 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:58.624 10:35:24 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:58.624 10:35:24 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:07:58.624 10:35:24 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:58.624 10:35:24 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:58.624 10:35:24 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.624 10:35:24 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.625 10:35:24 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.625 10:35:24 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.625 10:35:24 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.625 10:35:24 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.625 10:35:24 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.625 10:35:24 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.625 10:35:24 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.625 10:35:24 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.625 10:35:24 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.625 10:35:24 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:58.625 10:35:24 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:58.625 10:35:24 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.625 10:35:24 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:58.625 10:35:24 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:58.625 10:35:24 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:58.625 10:35:24 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.625 10:35:24 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:58.625 10:35:24 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.625 10:35:24 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:58.625 10:35:24 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:58.625 10:35:24 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.625 10:35:24 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:58.625 10:35:24 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.625 10:35:24 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.625 10:35:24 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.625 10:35:24 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:58.625 10:35:24 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.625 10:35:24 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:58.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.625 --rc genhtml_branch_coverage=1 00:07:58.625 --rc genhtml_function_coverage=1 00:07:58.625 --rc genhtml_legend=1 00:07:58.625 --rc geninfo_all_blocks=1 00:07:58.625 --rc geninfo_unexecuted_blocks=1 00:07:58.625 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:58.625 ' 00:07:58.625 10:35:24 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:58.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.625 --rc genhtml_branch_coverage=1 00:07:58.625 --rc genhtml_function_coverage=1 00:07:58.625 --rc 
genhtml_legend=1 00:07:58.625 --rc geninfo_all_blocks=1 00:07:58.625 --rc geninfo_unexecuted_blocks=1 00:07:58.625 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:58.625 ' 00:07:58.625 10:35:24 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:58.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.625 --rc genhtml_branch_coverage=1 00:07:58.625 --rc genhtml_function_coverage=1 00:07:58.625 --rc genhtml_legend=1 00:07:58.625 --rc geninfo_all_blocks=1 00:07:58.625 --rc geninfo_unexecuted_blocks=1 00:07:58.625 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:58.625 ' 00:07:58.625 10:35:24 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:58.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.625 --rc genhtml_branch_coverage=1 00:07:58.625 --rc genhtml_function_coverage=1 00:07:58.625 --rc genhtml_legend=1 00:07:58.625 --rc geninfo_all_blocks=1 00:07:58.625 --rc geninfo_unexecuted_blocks=1 00:07:58.625 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:58.625 ' 00:07:58.625 10:35:24 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:58.625 10:35:24 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2859752 00:07:58.625 10:35:24 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2859752 00:07:58.625 10:35:24 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:58.625 10:35:24 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 2859752 ']' 00:07:58.625 10:35:24 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.625 10:35:24 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:58.625 10:35:24 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.625 10:35:24 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:58.625 10:35:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:58.625 [2024-11-05 10:35:24.669595] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
00:07:58.625 [2024-11-05 10:35:24.669668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2859752 ] 00:07:58.883 [2024-11-05 10:35:24.777126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.883 [2024-11-05 10:35:24.830497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.141 10:35:25 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:59.141 10:35:25 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:07:59.141 10:35:25 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:59.399 { 00:07:59.399 "version": "SPDK v25.01-pre git sha1 2f35f3599", 00:07:59.399 "fields": { 00:07:59.399 "major": 25, 00:07:59.399 "minor": 1, 00:07:59.399 "patch": 0, 00:07:59.399 "suffix": "-pre", 00:07:59.399 "commit": "2f35f3599" 00:07:59.399 } 00:07:59.399 } 00:07:59.399 10:35:25 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:59.399 10:35:25 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:59.399 10:35:25 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:59.399 10:35:25 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:59.399 10:35:25 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:59.399 10:35:25 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:59.399 10:35:25 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.399 10:35:25 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:59.399 10:35:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:59.399 10:35:25 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.399 10:35:25 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:59.399 10:35:25 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:59.399 10:35:25 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:59.399 10:35:25 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:59.399 10:35:25 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:59.399 10:35:25 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:59.399 10:35:25 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.399 10:35:25 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:59.399 10:35:25 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.399 10:35:25 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:59.399 10:35:25 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.399 10:35:25 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:59.399 10:35:25 app_cmdline -- 
common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:07:59.399 10:35:25 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:59.657 request: 00:07:59.657 { 00:07:59.657 "method": "env_dpdk_get_mem_stats", 00:07:59.657 "req_id": 1 00:07:59.657 } 00:07:59.657 Got JSON-RPC error response 00:07:59.657 response: 00:07:59.657 { 00:07:59.657 "code": -32601, 00:07:59.657 "message": "Method not found" 00:07:59.657 } 00:07:59.657 10:35:25 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:59.657 10:35:25 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:59.657 10:35:25 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:59.657 10:35:25 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:59.657 10:35:25 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2859752 00:07:59.657 10:35:25 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 2859752 ']' 00:07:59.657 10:35:25 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 2859752 00:07:59.657 10:35:25 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:07:59.657 10:35:25 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:59.657 10:35:25 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2859752 00:07:59.657 10:35:25 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:59.657 10:35:25 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:59.657 10:35:25 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2859752' 00:07:59.657 killing process with pid 2859752 00:07:59.657 10:35:25 app_cmdline -- common/autotest_common.sh@971 -- # kill 2859752 00:07:59.657 10:35:25 app_cmdline -- common/autotest_common.sh@976 -- # wait 2859752 00:08:00.223 00:08:00.223 real 0m1.605s 00:08:00.223 user 0m1.941s 00:08:00.223 sys 0m0.560s 00:08:00.223 10:35:26 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:00.223 10:35:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:00.223 ************************************ 00:08:00.223 END TEST app_cmdline 00:08:00.223 ************************************ 00:08:00.223 10:35:26 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:08:00.223 10:35:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:00.223 10:35:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:00.223 10:35:26 -- common/autotest_common.sh@10 -- # set +x 00:08:00.223 ************************************ 00:08:00.223 START TEST version 00:08:00.223 ************************************ 00:08:00.223 10:35:26 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:08:00.223 * Looking for test storage... 
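Editorial note (not part of the captured log): the app_cmdline run that just finished starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods answer. A minimal sketch of reproducing that behaviour by hand against such a target:
./scripts/rpc.py spdk_get_version           # on the allow-list: returns the version object logged above
./scripts/rpc.py env_dpdk_get_mem_stats     # not on the allow-list: rejected with -32601 "Method not found"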
00:08:00.223 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:00.223 10:35:26 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:00.223 10:35:26 version -- common/autotest_common.sh@1691 -- # lcov --version 00:08:00.223 10:35:26 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:00.223 10:35:26 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:00.223 10:35:26 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:00.223 10:35:26 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:00.223 10:35:26 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:00.223 10:35:26 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.223 10:35:26 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:00.223 10:35:26 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:00.223 10:35:26 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:00.223 10:35:26 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.223 10:35:26 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.223 10:35:26 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.223 10:35:26 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.223 10:35:26 version -- scripts/common.sh@344 -- # case "$op" in 00:08:00.223 10:35:26 version -- scripts/common.sh@345 -- # : 1 00:08:00.223 10:35:26 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.223 10:35:26 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:00.223 10:35:26 version -- scripts/common.sh@365 -- # decimal 1 00:08:00.223 10:35:26 version -- scripts/common.sh@353 -- # local d=1 00:08:00.223 10:35:26 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.223 10:35:26 version -- scripts/common.sh@355 -- # echo 1 00:08:00.223 10:35:26 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.223 10:35:26 version -- scripts/common.sh@366 -- # decimal 2 00:08:00.223 10:35:26 version -- scripts/common.sh@353 -- # local d=2 00:08:00.223 10:35:26 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.223 10:35:26 version -- scripts/common.sh@355 -- # echo 2 00:08:00.482 10:35:26 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:00.482 10:35:26 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.482 10:35:26 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.482 10:35:26 version -- scripts/common.sh@368 -- # return 0 00:08:00.482 10:35:26 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.482 10:35:26 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:00.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.482 --rc genhtml_branch_coverage=1 00:08:00.482 --rc genhtml_function_coverage=1 00:08:00.482 --rc genhtml_legend=1 00:08:00.482 --rc geninfo_all_blocks=1 00:08:00.482 --rc geninfo_unexecuted_blocks=1 00:08:00.482 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:00.482 ' 00:08:00.482 10:35:26 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:00.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.482 --rc genhtml_branch_coverage=1 00:08:00.482 --rc genhtml_function_coverage=1 00:08:00.482 --rc genhtml_legend=1 00:08:00.482 --rc geninfo_all_blocks=1 00:08:00.482 --rc geninfo_unexecuted_blocks=1 00:08:00.482 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:00.482 ' 00:08:00.482 10:35:26 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:00.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.482 --rc genhtml_branch_coverage=1 00:08:00.482 --rc genhtml_function_coverage=1 00:08:00.482 --rc genhtml_legend=1 00:08:00.482 --rc geninfo_all_blocks=1 00:08:00.482 --rc geninfo_unexecuted_blocks=1 00:08:00.482 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:00.482 ' 00:08:00.482 10:35:26 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:00.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.482 --rc genhtml_branch_coverage=1 00:08:00.482 --rc genhtml_function_coverage=1 00:08:00.482 --rc genhtml_legend=1 00:08:00.482 --rc geninfo_all_blocks=1 00:08:00.482 --rc geninfo_unexecuted_blocks=1 00:08:00.482 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:00.482 ' 00:08:00.482 10:35:26 version -- app/version.sh@17 -- # get_header_version major 00:08:00.482 10:35:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:00.482 10:35:26 version -- app/version.sh@14 -- # cut -f2 00:08:00.482 10:35:26 version -- app/version.sh@14 -- # tr -d '"' 00:08:00.482 10:35:26 version -- app/version.sh@17 -- # major=25 00:08:00.482 10:35:26 version -- app/version.sh@18 -- # get_header_version minor 00:08:00.482 10:35:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:00.482 10:35:26 version -- app/version.sh@14 -- # cut -f2 00:08:00.482 10:35:26 version -- app/version.sh@14 -- # tr -d '"' 00:08:00.482 10:35:26 version -- app/version.sh@18 -- # minor=1 00:08:00.482 10:35:26 version -- app/version.sh@19 -- # get_header_version patch 00:08:00.482 10:35:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:00.482 10:35:26 version -- app/version.sh@14 -- # cut -f2 00:08:00.482 10:35:26 version -- app/version.sh@14 -- # tr -d '"' 00:08:00.482 10:35:26 version -- app/version.sh@19 -- # patch=0 00:08:00.482 10:35:26 version -- app/version.sh@20 -- # get_header_version suffix 00:08:00.482 10:35:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:00.482 10:35:26 version -- app/version.sh@14 -- # cut -f2 00:08:00.482 10:35:26 version -- app/version.sh@14 -- # tr -d '"' 00:08:00.482 10:35:26 version -- app/version.sh@20 -- # suffix=-pre 00:08:00.482 10:35:26 version -- app/version.sh@22 -- # version=25.1 00:08:00.482 10:35:26 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:00.482 10:35:26 version -- app/version.sh@28 -- # version=25.1rc0 00:08:00.482 10:35:26 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:00.482 10:35:26 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:08:00.482 10:35:26 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:00.482 10:35:26 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:00.482 00:08:00.482 real 0m0.244s 00:08:00.482 user 0m0.123s 00:08:00.482 sys 0m0.170s 00:08:00.482 10:35:26 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:00.482 10:35:26 version -- common/autotest_common.sh@10 -- # set +x 00:08:00.482 ************************************ 00:08:00.482 END TEST version 00:08:00.482 ************************************ 00:08:00.482 10:35:26 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:00.482 10:35:26 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:00.482 10:35:26 -- spdk/autotest.sh@194 -- # uname -s 00:08:00.482 10:35:26 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:00.482 10:35:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:00.482 10:35:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:00.482 10:35:26 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:00.482 10:35:26 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:08:00.482 10:35:26 -- spdk/autotest.sh@256 -- # timing_exit lib 00:08:00.482 10:35:26 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:00.482 10:35:26 -- common/autotest_common.sh@10 -- # set +x 00:08:00.482 10:35:26 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:08:00.482 10:35:26 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:08:00.482 10:35:26 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:08:00.482 10:35:26 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:08:00.482 10:35:26 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:08:00.482 10:35:26 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:08:00.482 10:35:26 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:08:00.482 10:35:26 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:08:00.482 10:35:26 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:08:00.482 10:35:26 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:08:00.482 10:35:26 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:08:00.482 10:35:26 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:08:00.482 10:35:26 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:08:00.482 10:35:26 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:08:00.482 10:35:26 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:08:00.482 10:35:26 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:08:00.482 10:35:26 -- spdk/autotest.sh@370 -- # [[ 1 -eq 1 ]] 00:08:00.482 10:35:26 -- spdk/autotest.sh@371 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:08:00.482 10:35:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:00.482 10:35:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:00.482 10:35:26 -- common/autotest_common.sh@10 -- # set +x 00:08:00.482 ************************************ 00:08:00.482 START TEST llvm_fuzz 00:08:00.482 ************************************ 00:08:00.483 10:35:26 llvm_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:08:00.741 * Looking for test storage... 
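Aside on the version test above: it rebuilds the SPDK version string purely from the SPDK_VERSION_* defines in include/spdk/version.h (via the grep/cut/tr pipeline in the trace) and cross-checks it against the installed Python package. A condensed sketch under the same repository layout; the get_field helper name is ours, not the script's:

    hdr=include/spdk/version.h
    get_field() { grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'; }
    major=$(get_field MAJOR)
    minor=$(get_field MINOR)
    patch=$(get_field PATCH)
    suffix=$(get_field SUFFIX)
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch
    [[ $suffix == -pre ]] && version=${version}rc0     # yields 25.1rc0, as seen in the trace
    python3 -c 'import spdk; print(spdk.__version__)' | grep -qx "$version"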
00:08:00.741 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:08:00.741 10:35:26 llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:00.741 10:35:26 llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:08:00.741 10:35:26 llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:00.741 10:35:26 llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.741 10:35:26 llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:08:00.741 10:35:26 llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.741 10:35:26 llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:00.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.741 --rc genhtml_branch_coverage=1 00:08:00.741 --rc genhtml_function_coverage=1 00:08:00.741 --rc genhtml_legend=1 00:08:00.741 --rc geninfo_all_blocks=1 00:08:00.741 --rc geninfo_unexecuted_blocks=1 00:08:00.741 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:00.741 ' 00:08:00.741 10:35:26 llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:00.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.741 --rc genhtml_branch_coverage=1 00:08:00.741 --rc genhtml_function_coverage=1 00:08:00.741 --rc genhtml_legend=1 00:08:00.741 --rc geninfo_all_blocks=1 00:08:00.741 --rc 
geninfo_unexecuted_blocks=1 00:08:00.741 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:00.741 ' 00:08:00.741 10:35:26 llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:00.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.742 --rc genhtml_branch_coverage=1 00:08:00.742 --rc genhtml_function_coverage=1 00:08:00.742 --rc genhtml_legend=1 00:08:00.742 --rc geninfo_all_blocks=1 00:08:00.742 --rc geninfo_unexecuted_blocks=1 00:08:00.742 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:00.742 ' 00:08:00.742 10:35:26 llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:00.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.742 --rc genhtml_branch_coverage=1 00:08:00.742 --rc genhtml_function_coverage=1 00:08:00.742 --rc genhtml_legend=1 00:08:00.742 --rc geninfo_all_blocks=1 00:08:00.742 --rc geninfo_unexecuted_blocks=1 00:08:00.742 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:00.742 ' 00:08:00.742 10:35:26 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:08:00.742 10:35:26 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:08:00.742 10:35:26 llvm_fuzz -- common/autotest_common.sh@548 -- # fuzzers=() 00:08:00.742 10:35:26 llvm_fuzz -- common/autotest_common.sh@548 -- # local fuzzers 00:08:00.742 10:35:26 llvm_fuzz -- common/autotest_common.sh@550 -- # [[ -n '' ]] 00:08:00.742 10:35:26 llvm_fuzz -- common/autotest_common.sh@553 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:08:00.742 10:35:26 llvm_fuzz -- common/autotest_common.sh@554 -- # fuzzers=("${fuzzers[@]##*/}") 00:08:00.742 10:35:26 llvm_fuzz -- common/autotest_common.sh@557 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:08:00.742 10:35:26 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:08:00.742 10:35:26 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:08:00.742 10:35:26 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:08:00.742 10:35:26 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:08:00.742 10:35:26 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:08:00.742 10:35:26 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:08:00.742 10:35:26 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:08:00.742 10:35:26 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:08:00.742 10:35:26 llvm_fuzz -- fuzz/llvm.sh@19 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:08:00.742 10:35:26 llvm_fuzz -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:00.742 10:35:26 llvm_fuzz -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:00.742 10:35:26 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:00.742 ************************************ 00:08:00.742 START TEST nvmf_llvm_fuzz 00:08:00.742 ************************************ 00:08:00.742 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:08:01.003 * Looking for test storage... 
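Aside on the llvm.sh trace above: the fuzzer list comes from globbing test/fuzz/llvm/* (hence the 'common.sh llvm-gcov.sh nvmf vfio' echo), and the case statement then runs only the nvmf and vfio sub-runners. A condensed sketch of that selection; the ${fuzzer}_llvm_fuzz naming is inferred from the nvmf case visible in the trace, and run_test is the helper from autotest_common.sh:

    fuzzers=("$rootdir/test/fuzz/llvm/"*)       # glob the llvm fuzz directory
    fuzzers=("${fuzzers[@]##*/}")               # keep basenames only
    for fuzzer in "${fuzzers[@]}"; do
        case "$fuzzer" in
            nvmf | vfio)
                run_test "${fuzzer}_llvm_fuzz" "$rootdir/test/fuzz/llvm/$fuzzer/run.sh"
                ;;
            *) ;;                               # helper scripts (common.sh, llvm-gcov.sh) are skipped
        esac
    done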
00:08:01.003 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:01.003 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:01.003 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:08:01.003 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:01.003 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:01.003 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.003 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.003 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.003 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.003 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.003 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.003 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.003 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:01.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.004 --rc genhtml_branch_coverage=1 00:08:01.004 --rc genhtml_function_coverage=1 00:08:01.004 --rc genhtml_legend=1 00:08:01.004 --rc geninfo_all_blocks=1 00:08:01.004 --rc geninfo_unexecuted_blocks=1 00:08:01.004 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:01.004 ' 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:01.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.004 --rc genhtml_branch_coverage=1 00:08:01.004 --rc genhtml_function_coverage=1 00:08:01.004 --rc genhtml_legend=1 00:08:01.004 --rc geninfo_all_blocks=1 00:08:01.004 --rc geninfo_unexecuted_blocks=1 00:08:01.004 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:01.004 ' 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:01.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.004 --rc genhtml_branch_coverage=1 00:08:01.004 --rc genhtml_function_coverage=1 00:08:01.004 --rc genhtml_legend=1 00:08:01.004 --rc geninfo_all_blocks=1 00:08:01.004 --rc geninfo_unexecuted_blocks=1 00:08:01.004 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:01.004 ' 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:01.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.004 --rc genhtml_branch_coverage=1 00:08:01.004 --rc genhtml_function_coverage=1 00:08:01.004 --rc genhtml_legend=1 00:08:01.004 --rc geninfo_all_blocks=1 00:08:01.004 --rc geninfo_unexecuted_blocks=1 00:08:01.004 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:01.004 ' 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:08:01.004 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:01.005 #define SPDK_CONFIG_H 00:08:01.005 #define SPDK_CONFIG_AIO_FSDEV 1 00:08:01.005 #define SPDK_CONFIG_APPS 1 00:08:01.005 #define SPDK_CONFIG_ARCH native 00:08:01.005 #undef SPDK_CONFIG_ASAN 00:08:01.005 #undef SPDK_CONFIG_AVAHI 00:08:01.005 #undef SPDK_CONFIG_CET 00:08:01.005 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:08:01.005 #define SPDK_CONFIG_COVERAGE 1 00:08:01.005 #define SPDK_CONFIG_CROSS_PREFIX 00:08:01.005 #undef SPDK_CONFIG_CRYPTO 00:08:01.005 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:01.005 #undef SPDK_CONFIG_CUSTOMOCF 00:08:01.005 #undef SPDK_CONFIG_DAOS 00:08:01.005 #define SPDK_CONFIG_DAOS_DIR 00:08:01.005 #define SPDK_CONFIG_DEBUG 1 00:08:01.005 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:01.005 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:01.005 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:01.005 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:01.005 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:01.005 #undef SPDK_CONFIG_DPDK_UADK 00:08:01.005 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:01.005 #define SPDK_CONFIG_EXAMPLES 1 00:08:01.005 #undef SPDK_CONFIG_FC 00:08:01.005 #define SPDK_CONFIG_FC_PATH 00:08:01.005 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:01.005 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:01.005 #define SPDK_CONFIG_FSDEV 1 00:08:01.005 #undef SPDK_CONFIG_FUSE 00:08:01.005 #define SPDK_CONFIG_FUZZER 1 00:08:01.005 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:08:01.005 #undef 
SPDK_CONFIG_GOLANG 00:08:01.005 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:01.005 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:01.005 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:01.005 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:01.005 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:01.005 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:01.005 #undef SPDK_CONFIG_HAVE_LZ4 00:08:01.005 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:08:01.005 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:08:01.005 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:01.005 #define SPDK_CONFIG_IDXD 1 00:08:01.005 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:01.005 #undef SPDK_CONFIG_IPSEC_MB 00:08:01.005 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:01.005 #define SPDK_CONFIG_ISAL 1 00:08:01.005 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:01.005 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:01.005 #define SPDK_CONFIG_LIBDIR 00:08:01.005 #undef SPDK_CONFIG_LTO 00:08:01.005 #define SPDK_CONFIG_MAX_LCORES 128 00:08:01.005 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:08:01.005 #define SPDK_CONFIG_NVME_CUSE 1 00:08:01.005 #undef SPDK_CONFIG_OCF 00:08:01.005 #define SPDK_CONFIG_OCF_PATH 00:08:01.005 #define SPDK_CONFIG_OPENSSL_PATH 00:08:01.005 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:01.005 #define SPDK_CONFIG_PGO_DIR 00:08:01.005 #undef SPDK_CONFIG_PGO_USE 00:08:01.005 #define SPDK_CONFIG_PREFIX /usr/local 00:08:01.005 #undef SPDK_CONFIG_RAID5F 00:08:01.005 #undef SPDK_CONFIG_RBD 00:08:01.005 #define SPDK_CONFIG_RDMA 1 00:08:01.005 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:01.005 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:01.005 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:01.005 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:01.005 #undef SPDK_CONFIG_SHARED 00:08:01.005 #undef SPDK_CONFIG_SMA 00:08:01.005 #define SPDK_CONFIG_TESTS 1 00:08:01.005 #undef SPDK_CONFIG_TSAN 00:08:01.005 #define SPDK_CONFIG_UBLK 1 00:08:01.005 #define SPDK_CONFIG_UBSAN 1 00:08:01.005 #undef SPDK_CONFIG_UNIT_TESTS 00:08:01.005 #undef SPDK_CONFIG_URING 00:08:01.005 #define SPDK_CONFIG_URING_PATH 00:08:01.005 #undef SPDK_CONFIG_URING_ZNS 00:08:01.005 #undef SPDK_CONFIG_USDT 00:08:01.005 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:01.005 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:01.005 #define SPDK_CONFIG_VFIO_USER 1 00:08:01.005 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:01.005 #define SPDK_CONFIG_VHOST 1 00:08:01.005 #define SPDK_CONFIG_VIRTIO 1 00:08:01.005 #undef SPDK_CONFIG_VTUNE 00:08:01.005 #define SPDK_CONFIG_VTUNE_DIR 00:08:01.005 #define SPDK_CONFIG_WERROR 1 00:08:01.005 #define SPDK_CONFIG_WPDK_DIR 00:08:01.005 #undef SPDK_CONFIG_XNVME 00:08:01.005 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.005 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.006 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.006 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:08:01.006 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.006 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:01.006 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:01.006 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:01.006 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:01.006 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:01.006 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:01.006 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:08:01.006 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:01.006 10:35:26 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:08:01.006 10:35:27 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:01.006 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # 
PYTHONDONTWRITEBYTECODE=1 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@204 -- # cat 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV= 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ 1 -eq 1 ]] 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV=1 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@277 -- # export valgrind= 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@277 -- # valgrind= 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@283 -- # uname -s 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # MAKE=make 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j72 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:08:01.007 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@307 -- # TEST_MODE= 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # [[ -z 2860267 ]] 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # kill -0 2860267 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@342 -- # local mount target_dir 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@344 -- # local -A mounts fss sizes 
avails uses 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.fO718b 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.fO718b/tests/nvmf /tmp/spdk.fO718b 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # df -T 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=81385893888 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=94500290560 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=13114396672 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:01.008 
10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47245381632 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250145280 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4763648 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=18893955072 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=18900058112 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=6103040 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=46175846400 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250145280 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=1074298880 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=9450016768 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=9450029056 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:08:01.008 * Looking for test storage... 
00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@379 -- # local target_space new_size 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # mount=/ 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # target_space=81385893888 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@392 -- # new_size=15328989184 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:01.008 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # return 0 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1683 -- # true 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:08:01.008 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:01.009 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:01.009 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:01.009 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:08:01.009 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:01.009 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:01.009 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.267 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:01.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.268 --rc genhtml_branch_coverage=1 00:08:01.268 --rc genhtml_function_coverage=1 00:08:01.268 --rc genhtml_legend=1 00:08:01.268 --rc geninfo_all_blocks=1 00:08:01.268 --rc geninfo_unexecuted_blocks=1 00:08:01.268 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:01.268 ' 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:01.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.268 --rc genhtml_branch_coverage=1 00:08:01.268 --rc genhtml_function_coverage=1 00:08:01.268 --rc genhtml_legend=1 00:08:01.268 --rc geninfo_all_blocks=1 00:08:01.268 --rc geninfo_unexecuted_blocks=1 00:08:01.268 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:01.268 ' 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:01.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.268 --rc genhtml_branch_coverage=1 00:08:01.268 --rc genhtml_function_coverage=1 00:08:01.268 --rc genhtml_legend=1 00:08:01.268 --rc geninfo_all_blocks=1 00:08:01.268 --rc geninfo_unexecuted_blocks=1 00:08:01.268 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:01.268 ' 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:01.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.268 --rc genhtml_branch_coverage=1 00:08:01.268 --rc genhtml_function_coverage=1 00:08:01.268 --rc genhtml_legend=1 00:08:01.268 --rc geninfo_all_blocks=1 00:08:01.268 --rc geninfo_unexecuted_blocks=1 00:08:01.268 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:01.268 ' 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:08:01.268 10:35:27 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:01.268 10:35:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:08:01.268 [2024-11-05 10:35:27.206881] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:08:01.268 [2024-11-05 10:35:27.206970] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2860323 ] 00:08:01.526 [2024-11-05 10:35:27.478618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.526 [2024-11-05 10:35:27.526467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.526 [2024-11-05 10:35:27.590643] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.784 [2024-11-05 10:35:27.606890] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:08:01.784 INFO: Running with entropic power schedule (0xFF, 100). 00:08:01.784 INFO: Seed: 2087834279 00:08:01.785 INFO: Loaded 1 modules (387441 inline 8-bit counters): 387441 [0x2c3ac4c, 0x2c995bd), 00:08:01.785 INFO: Loaded 1 PC tables (387441 PCs): 387441 [0x2c995c0,0x3282cd0), 00:08:01.785 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:08:01.785 INFO: A corpus is not provided, starting from an empty corpus 00:08:01.785 #2 INITED exec/s: 0 rss: 66Mb 00:08:01.785 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:01.785 This may also happen if the target rejected all inputs we tried so far 00:08:01.785 [2024-11-05 10:35:27.678009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (17) qid:0 cid:4 nsid:17171717 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1717171717171717 00:08:01.785 [2024-11-05 10:35:27.678055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.043 NEW_FUNC[1/716]: 0x43bbc8 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:08:02.043 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:02.043 #12 NEW cov: 12206 ft: 12205 corp: 2/94b lim: 320 exec/s: 0 rss: 73Mb L: 93/93 MS: 5 CopyPart-ChangeBit-InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:08:02.043 [2024-11-05 10:35:28.038552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (17) qid:0 cid:4 nsid:17175d00 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1717171717171717 00:08:02.043 [2024-11-05 10:35:28.038606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.043 #13 NEW cov: 12319 ft: 12959 corp: 3/187b lim: 320 exec/s: 0 rss: 73Mb L: 93/93 MS: 1 ChangeBinInt- 00:08:02.043 [2024-11-05 10:35:28.108825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (17) qid:0 cid:4 nsid:17175d00 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1717171717171717 00:08:02.043 [2024-11-05 10:35:28.108852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.301 #14 NEW cov: 12325 ft: 13148 corp: 4/280b lim: 320 exec/s: 0 rss: 73Mb L: 93/93 MS: 1 ChangeByte- 
00:08:02.301 [2024-11-05 10:35:28.179093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (17) qid:0 cid:4 nsid:17171717 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1717171717171717 00:08:02.301 [2024-11-05 10:35:28.179120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.301 #15 NEW cov: 12410 ft: 13352 corp: 5/373b lim: 320 exec/s: 0 rss: 73Mb L: 93/93 MS: 1 ChangeBinInt- 00:08:02.301 [2024-11-05 10:35:28.229286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (17) qid:0 cid:4 nsid:17171717 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1717171717171717 00:08:02.301 [2024-11-05 10:35:28.229314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.301 #16 NEW cov: 12410 ft: 13469 corp: 6/466b lim: 320 exec/s: 0 rss: 73Mb L: 93/93 MS: 1 ShuffleBytes- 00:08:02.301 [2024-11-05 10:35:28.279461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (17) qid:0 cid:4 nsid:40000 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1717171717171717 00:08:02.301 [2024-11-05 10:35:28.279487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.301 #17 NEW cov: 12410 ft: 13502 corp: 7/563b lim: 320 exec/s: 0 rss: 73Mb L: 97/97 MS: 1 CMP- DE: "\001\000\000\004"- 00:08:02.301 [2024-11-05 10:35:28.349664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (17) qid:0 cid:4 nsid:17171717 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1717171717171717 00:08:02.301 [2024-11-05 10:35:28.349691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.559 #18 NEW cov: 12410 ft: 13566 corp: 8/656b lim: 320 exec/s: 0 rss: 73Mb L: 93/97 MS: 1 ChangeBinInt- 00:08:02.559 [2024-11-05 10:35:28.420135] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:bebebebe SGL TRANSPORT DATA BLOCK TRANSPORT 0xbebebebebebebebe 00:08:02.559 [2024-11-05 10:35:28.420167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.559 #25 NEW cov: 12436 ft: 13741 corp: 9/779b lim: 320 exec/s: 0 rss: 73Mb L: 123/123 MS: 2 PersAutoDict-InsertRepeatedBytes- DE: "\001\000\000\004"- 00:08:02.559 [2024-11-05 10:35:28.470178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (17) qid:0 cid:4 nsid:17175d00 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1717171717171717 00:08:02.559 [2024-11-05 10:35:28.470206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.559 #26 NEW cov: 12436 ft: 13765 corp: 10/850b lim: 320 exec/s: 0 rss: 73Mb L: 71/123 MS: 1 EraseBytes- 00:08:02.559 [2024-11-05 10:35:28.520575] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:bebebebe SGL TRANSPORT DATA BLOCK TRANSPORT 0xbebebebebebebebe 00:08:02.559 [2024-11-05 10:35:28.520604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.559 NEW_FUNC[1/1]: 0x1c30d58 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:02.559 #27 NEW cov: 12459 ft: 13936 corp: 11/928b lim: 320 exec/s: 0 rss: 73Mb L: 78/123 MS: 1 EraseBytes- 00:08:02.559 [2024-11-05 10:35:28.590785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (17) qid:0 cid:4 nsid:17175d00 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1717171717171717 00:08:02.559 [2024-11-05 10:35:28.590814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.559 #33 NEW cov: 12459 ft: 13991 corp: 12/1021b lim: 320 exec/s: 0 rss: 73Mb L: 93/123 MS: 1 CopyPart- 00:08:02.817 [2024-11-05 10:35:28.640988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (17) qid:0 cid:4 nsid:17175d00 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1717171717175d00 00:08:02.817 [2024-11-05 10:35:28.641015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.817 #34 NEW cov: 12459 ft: 14025 corp: 13/1124b lim: 320 exec/s: 34 rss: 74Mb L: 103/123 MS: 1 CopyPart- 00:08:02.817 [2024-11-05 10:35:28.711271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (17) qid:0 cid:4 nsid:17175d00 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1717171717171717 00:08:02.817 [2024-11-05 10:35:28.711299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.817 #35 NEW cov: 12459 ft: 14059 corp: 14/1195b lim: 320 exec/s: 35 rss: 74Mb L: 71/123 MS: 1 ChangeBit- 00:08:02.817 [2024-11-05 10:35:28.761423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (17) qid:0 cid:4 nsid:17171717 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1717171717171717 00:08:02.817 [2024-11-05 10:35:28.761452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.817 #36 NEW cov: 12459 ft: 14092 corp: 15/1288b lim: 320 exec/s: 36 rss: 74Mb L: 93/123 MS: 1 ShuffleBytes- 00:08:02.817 [2024-11-05 10:35:28.832004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3e) qid:0 cid:4 nsid:38383838 cdw10:38383838 cdw11:38383838 SGL TRANSPORT DATA BLOCK TRANSPORT 0x3838383838383838 00:08:02.817 [2024-11-05 10:35:28.832031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.817 #38 NEW cov: 12469 ft: 14127 corp: 16/1370b lim: 320 exec/s: 38 rss: 74Mb L: 82/123 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:02.817 [2024-11-05 10:35:28.882169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (17) qid:0 cid:4 nsid:17171717 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1717171717171717 00:08:02.817 [2024-11-05 10:35:28.882196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.074 #39 NEW cov: 12469 ft: 14170 corp: 17/1452b lim: 320 exec/s: 39 rss: 74Mb L: 82/123 MS: 1 EraseBytes- 00:08:03.074 [2024-11-05 10:35:28.932508] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:bebebebe SGL TRANSPORT DATA BLOCK TRANSPORT 0xbebebebebebebebe 00:08:03.074 [2024-11-05 10:35:28.932538] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.075 #40 NEW cov: 12469 ft: 14234 corp: 18/1530b lim: 320 exec/s: 40 rss: 74Mb L: 78/123 MS: 1 ChangeByte- 00:08:03.075 [2024-11-05 10:35:29.002777] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:bebebebe SGL TRANSPORT DATA BLOCK TRANSPORT 0xbebebebebebebebe 00:08:03.075 [2024-11-05 10:35:29.002807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.075 #41 NEW cov: 12469 ft: 14284 corp: 19/1609b lim: 320 exec/s: 41 rss: 74Mb L: 79/123 MS: 1 InsertByte- 00:08:03.075 [2024-11-05 10:35:29.052901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (17) qid:0 cid:4 nsid:17175d00 cdw10:17171717 cdw11:17401717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1717171717171717 00:08:03.075 [2024-11-05 10:35:29.052932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.075 #42 NEW cov: 12469 ft: 14291 corp: 20/1702b lim: 320 exec/s: 42 rss: 74Mb L: 93/123 MS: 1 ChangeByte- 00:08:03.075 [2024-11-05 10:35:29.103215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (17) qid:0 cid:4 nsid:17171717 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1717171717171717 00:08:03.075 [2024-11-05 10:35:29.103245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.075 #43 NEW cov: 12469 ft: 14298 corp: 21/1821b lim: 320 exec/s: 43 rss: 74Mb L: 119/123 MS: 1 InsertRepeatedBytes- 00:08:03.075 [2024-11-05 10:35:29.153400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (17) qid:0 cid:4 nsid:17175d00 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1717171717171717 00:08:03.075 [2024-11-05 10:35:29.153431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.332 #44 NEW cov: 12469 ft: 14312 corp: 22/1892b lim: 320 exec/s: 44 rss: 74Mb L: 71/123 MS: 1 ShuffleBytes- 00:08:03.332 [2024-11-05 10:35:29.233992] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1717171717bebebe 00:08:03.332 [2024-11-05 10:35:29.234025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.332 [2024-11-05 10:35:29.234128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (17) qid:0 cid:5 nsid:17171717 cdw10:1717e817 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.332 [2024-11-05 10:35:29.234146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.332 #46 NEW cov: 12469 ft: 14485 corp: 23/2062b lim: 320 exec/s: 46 rss: 74Mb L: 170/170 MS: 2 EraseBytes-CrossOver- 00:08:03.332 [2024-11-05 10:35:29.304145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (41) qid:0 cid:4 nsid:bebebebe cdw10:bebebebe cdw11:bebebebe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.333 [2024-11-05 10:35:29.304174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.333 NEW_FUNC[1/1]: 0x1963418 in nvme_get_sgl_unkeyed 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:08:03.333 #47 NEW cov: 12482 ft: 14808 corp: 24/2140b lim: 320 exec/s: 47 rss: 74Mb L: 78/170 MS: 1 ChangeByte- 00:08:03.333 [2024-11-05 10:35:29.354204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (17) qid:0 cid:4 nsid:17175d00 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1717171717171717 00:08:03.333 [2024-11-05 10:35:29.354232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.333 #48 NEW cov: 12482 ft: 14860 corp: 25/2211b lim: 320 exec/s: 48 rss: 74Mb L: 71/170 MS: 1 ChangeBit- 00:08:03.590 [2024-11-05 10:35:29.424820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (17) qid:0 cid:4 nsid:17171717 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1717171717171717 00:08:03.590 [2024-11-05 10:35:29.424847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.590 #49 NEW cov: 12482 ft: 14928 corp: 26/2304b lim: 320 exec/s: 49 rss: 74Mb L: 93/170 MS: 1 ChangeBinInt- 00:08:03.590 [2024-11-05 10:35:29.475067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (17) qid:0 cid:4 nsid:17175d00 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1717171717171717 00:08:03.590 [2024-11-05 10:35:29.475094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.590 #50 NEW cov: 12482 ft: 14932 corp: 27/2397b lim: 320 exec/s: 50 rss: 74Mb L: 93/170 MS: 1 ChangeByte- 00:08:03.590 [2024-11-05 10:35:29.525288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (17) qid:0 cid:4 nsid:17171717 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1717171717171717 00:08:03.590 [2024-11-05 10:35:29.525314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.590 #51 NEW cov: 12482 ft: 14954 corp: 28/2479b lim: 320 exec/s: 51 rss: 74Mb L: 82/170 MS: 1 ChangeByte- 00:08:03.590 [2024-11-05 10:35:29.595549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (17) qid:0 cid:4 nsid:17175d00 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1717171717171717 00:08:03.590 [2024-11-05 10:35:29.595577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.590 #52 NEW cov: 12482 ft: 14961 corp: 29/2550b lim: 320 exec/s: 52 rss: 74Mb L: 71/170 MS: 1 ChangeBit- 00:08:03.590 [2024-11-05 10:35:29.665843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (17) qid:0 cid:4 nsid:177e1717 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1717171717171717 00:08:03.590 [2024-11-05 10:35:29.665870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.848 #53 NEW cov: 12482 ft: 14963 corp: 30/2644b lim: 320 exec/s: 26 rss: 74Mb L: 94/170 MS: 1 InsertByte- 00:08:03.848 #53 DONE cov: 12482 ft: 14963 corp: 30/2644b lim: 320 exec/s: 26 rss: 74Mb 00:08:03.848 ###### Recommended dictionary. ###### 00:08:03.848 "\001\000\000\004" # Uses: 3 00:08:03.848 ###### End of recommended dictionary. 
###### 00:08:03.848 Done 53 runs in 2 second(s) 00:08:03.848 10:35:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:08:03.848 10:35:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:03.848 10:35:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:03.848 10:35:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:08:03.848 10:35:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:08:03.848 10:35:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:03.848 10:35:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:03.848 10:35:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:08:03.848 10:35:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:08:03.848 10:35:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:03.848 10:35:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:03.848 10:35:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:08:03.848 10:35:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:08:03.848 10:35:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:08:03.848 10:35:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:08:03.848 10:35:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:03.848 10:35:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:03.848 10:35:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:03.848 10:35:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:08:03.848 [2024-11-05 10:35:29.828646] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:08:03.848 [2024-11-05 10:35:29.828702] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2860685 ] 00:08:04.106 [2024-11-05 10:35:30.072610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.106 [2024-11-05 10:35:30.122928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.364 [2024-11-05 10:35:30.186797] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.364 [2024-11-05 10:35:30.203033] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:08:04.364 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:04.364 INFO: Seed: 389898622 00:08:04.364 INFO: Loaded 1 modules (387441 inline 8-bit counters): 387441 [0x2c3ac4c, 0x2c995bd), 00:08:04.364 INFO: Loaded 1 PC tables (387441 PCs): 387441 [0x2c995c0,0x3282cd0), 00:08:04.364 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:08:04.364 INFO: A corpus is not provided, starting from an empty corpus 00:08:04.364 #2 INITED exec/s: 0 rss: 66Mb 00:08:04.364 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:04.364 This may also happen if the target rejected all inputs we tried so far 00:08:04.364 [2024-11-05 10:35:30.273962] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa 00:08:04.364 [2024-11-05 10:35:30.274521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5d550001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.364 [2024-11-05 10:35:30.274575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.622 NEW_FUNC[1/716]: 0x43c4c8 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:08:04.622 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:04.622 #17 NEW cov: 12272 ft: 12221 corp: 2/8b lim: 30 exec/s: 0 rss: 73Mb L: 7/7 MS: 5 ShuffleBytes-InsertByte-CopyPart-ChangeBit-CMP- DE: "\001\000\000\012"- 00:08:04.622 [2024-11-05 10:35:30.674769] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300004f4f 00:08:04.622 [2024-11-05 10:35:30.675275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4f4f834f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.622 [2024-11-05 10:35:30.675326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.880 #19 NEW cov: 12385 ft: 12739 corp: 3/16b lim: 30 exec/s: 0 rss: 73Mb L: 8/8 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:04.880 [2024-11-05 10:35:30.735297] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10356) > buf size (4096) 00:08:04.880 [2024-11-05 10:35:30.735577] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (28788) > buf size (4096) 00:08:04.880 [2024-11-05 10:35:30.735836] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (28788) > buf size (4096) 00:08:04.880 [2024-11-05 10:35:30.736314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a1c001c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.880 [2024-11-05 10:35:30.736354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.880 [2024-11-05 10:35:30.736457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:1c1c001c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.880 [2024-11-05 10:35:30.736479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.880 [2024-11-05 10:35:30.736570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:1c1c001c cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.880 [2024-11-05 10:35:30.736593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.880 #21 NEW cov: 12414 ft: 13487 corp: 4/37b lim: 30 exec/s: 0 rss: 73Mb L: 21/21 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:08:04.880 [2024-11-05 10:35:30.805658] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:08:04.880 [2024-11-05 10:35:30.805947] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:08:04.880 [2024-11-05 10:35:30.806447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5d550001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.880 [2024-11-05 10:35:30.806486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.880 [2024-11-05 10:35:30.806591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.880 [2024-11-05 10:35:30.806614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.880 #22 NEW cov: 12499 ft: 13994 corp: 5/51b lim: 30 exec/s: 0 rss: 73Mb L: 14/21 MS: 1 InsertRepeatedBytes- 00:08:04.880 [2024-11-05 10:35:30.906349] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x15 00:08:04.880 [2024-11-05 10:35:30.906626] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (28788) > buf size (4096) 00:08:04.880 [2024-11-05 10:35:30.906912] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (28788) > buf size (4096) 00:08:04.880 [2024-11-05 10:35:30.907388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a1c001c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.880 [2024-11-05 10:35:30.907428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.880 [2024-11-05 10:35:30.907524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:1c1c001c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.880 [2024-11-05 10:35:30.907547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.880 [2024-11-05 10:35:30.907640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:1c1c001c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.880 [2024-11-05 10:35:30.907664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.138 #23 NEW cov: 12499 ft: 14069 corp: 6/72b lim: 30 exec/s: 0 rss: 73Mb L: 21/21 MS: 1 ChangeBinInt- 00:08:05.138 [2024-11-05 10:35:30.996878] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300004f4f 00:08:05.138 [2024-11-05 10:35:30.997163] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa0a 00:08:05.138 [2024-11-05 10:35:30.997644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4f4f834f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.138 [2024-11-05 10:35:30.997687] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.138 [2024-11-05 10:35:30.997782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4f010000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.138 [2024-11-05 10:35:30.997804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.138 #24 NEW cov: 12499 ft: 14195 corp: 7/84b lim: 30 exec/s: 0 rss: 73Mb L: 12/21 MS: 1 PersAutoDict- DE: "\001\000\000\012"- 00:08:05.138 [2024-11-05 10:35:31.087530] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10356) > buf size (4096) 00:08:05.138 [2024-11-05 10:35:31.087824] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (28788) > buf size (4096) 00:08:05.138 [2024-11-05 10:35:31.088095] ctrlr.c:2698:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (7196) > len (4) 00:08:05.138 [2024-11-05 10:35:31.088373] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (28788) > buf size (4096) 00:08:05.138 [2024-11-05 10:35:31.088906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a1c001c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.138 [2024-11-05 10:35:31.088945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.139 [2024-11-05 10:35:31.089050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:1c1c001c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.139 [2024-11-05 10:35:31.089073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.139 [2024-11-05 10:35:31.089175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0000001c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.139 [2024-11-05 10:35:31.089196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.139 [2024-11-05 10:35:31.089293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:1c1c001c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.139 [2024-11-05 10:35:31.089315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:05.139 NEW_FUNC[1/1]: 0x1c30d58 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:05.139 #25 NEW cov: 12535 ft: 14785 corp: 8/109b lim: 30 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:08:05.139 [2024-11-05 10:35:31.157805] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300004f4f 00:08:05.139 [2024-11-05 10:35:31.158292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4f4f834f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.139 [2024-11-05 10:35:31.158330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.139 #26 NEW cov: 12535 ft: 14833 corp: 9/117b lim: 30 exec/s: 0 rss: 73Mb L: 8/25 MS: 1 ChangeBinInt- 00:08:05.397 [2024-11-05 10:35:31.218331] 
ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300004f4f 00:08:05.397 [2024-11-05 10:35:31.218621] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (525316) > buf size (4096) 00:08:05.397 [2024-11-05 10:35:31.219143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4f4f834f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.397 [2024-11-05 10:35:31.219181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.397 [2024-11-05 10:35:31.219275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:01000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.397 [2024-11-05 10:35:31.219302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.397 #27 NEW cov: 12535 ft: 14876 corp: 10/129b lim: 30 exec/s: 27 rss: 73Mb L: 12/25 MS: 1 PersAutoDict- DE: "\001\000\000\012"- 00:08:05.397 [2024-11-05 10:35:31.308827] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300004f4f 00:08:05.397 [2024-11-05 10:35:31.309308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4f4f834f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.397 [2024-11-05 10:35:31.309347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.397 #28 NEW cov: 12535 ft: 14946 corp: 11/136b lim: 30 exec/s: 28 rss: 73Mb L: 7/25 MS: 1 EraseBytes- 00:08:05.397 [2024-11-05 10:35:31.369395] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (343360) > buf size (4096) 00:08:05.397 [2024-11-05 10:35:31.369685] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300004f0a 00:08:05.397 [2024-11-05 10:35:31.370142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4f4f814f cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.397 [2024-11-05 10:35:31.370180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.397 [2024-11-05 10:35:31.370272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0a4f834f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.397 [2024-11-05 10:35:31.370295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.397 #29 NEW cov: 12535 ft: 14959 corp: 12/148b lim: 30 exec/s: 29 rss: 73Mb L: 12/25 MS: 1 PersAutoDict- DE: "\001\000\000\012"- 00:08:05.397 [2024-11-05 10:35:31.429738] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004f4f 00:08:05.397 [2024-11-05 10:35:31.430231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4f4f814f cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.397 [2024-11-05 10:35:31.430269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.397 #30 NEW cov: 12535 ft: 14994 corp: 13/156b lim: 30 exec/s: 30 rss: 73Mb L: 8/25 MS: 1 ChangeBit- 00:08:05.655 [2024-11-05 10:35:31.490253] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page 
offset 0x300004f4f 00:08:05.655 [2024-11-05 10:35:31.490774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4f4f834f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.655 [2024-11-05 10:35:31.490812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.655 #31 NEW cov: 12535 ft: 15004 corp: 14/164b lim: 30 exec/s: 31 rss: 73Mb L: 8/25 MS: 1 ShuffleBytes- 00:08:05.655 [2024-11-05 10:35:31.550858] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300004f4f 00:08:05.655 [2024-11-05 10:35:31.551153] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (80940) > buf size (4096) 00:08:05.655 [2024-11-05 10:35:31.551428] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262148) > buf size (4096) 00:08:05.655 [2024-11-05 10:35:31.551918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4f4f834f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.655 [2024-11-05 10:35:31.551956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.655 [2024-11-05 10:35:31.552060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4f0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.655 [2024-11-05 10:35:31.552084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.655 [2024-11-05 10:35:31.552184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.655 [2024-11-05 10:35:31.552207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.655 #32 NEW cov: 12535 ft: 15037 corp: 15/184b lim: 30 exec/s: 32 rss: 74Mb L: 20/25 MS: 1 CMP- DE: "\012\000\000\000\000\000\000\000"- 00:08:05.655 [2024-11-05 10:35:31.641386] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (81216) > buf size (4096) 00:08:05.655 [2024-11-05 10:35:31.641685] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (80940) > buf size (4096) 00:08:05.655 [2024-11-05 10:35:31.641975] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262148) > buf size (4096) 00:08:05.655 [2024-11-05 10:35:31.642486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4f4f0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.655 [2024-11-05 10:35:31.642525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.655 [2024-11-05 10:35:31.642625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4f0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.655 [2024-11-05 10:35:31.642649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.655 [2024-11-05 10:35:31.642746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.655 [2024-11-05 10:35:31.642768] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.655 #33 NEW cov: 12535 ft: 15055 corp: 16/204b lim: 30 exec/s: 33 rss: 74Mb L: 20/25 MS: 1 ChangeBinInt- 00:08:05.913 [2024-11-05 10:35:31.741627] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (81216) > buf size (4096) 00:08:05.913 [2024-11-05 10:35:31.741933] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (80940) > buf size (4096) 00:08:05.913 [2024-11-05 10:35:31.742436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4f4f0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.913 [2024-11-05 10:35:31.742477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.913 [2024-11-05 10:35:31.742577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4f0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.913 [2024-11-05 10:35:31.742600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.913 #34 NEW cov: 12535 ft: 15085 corp: 17/221b lim: 30 exec/s: 34 rss: 74Mb L: 17/25 MS: 1 EraseBytes- 00:08:05.913 [2024-11-05 10:35:31.832256] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (81216) > buf size (4096) 00:08:05.913 [2024-11-05 10:35:31.832752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4f4f0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.913 [2024-11-05 10:35:31.832794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.913 #35 NEW cov: 12535 ft: 15121 corp: 18/232b lim: 30 exec/s: 35 rss: 74Mb L: 11/25 MS: 1 EraseBytes- 00:08:05.913 [2024-11-05 10:35:31.922846] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (867648) > buf size (4096) 00:08:05.913 [2024-11-05 10:35:31.923123] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x4f4f 00:08:05.913 [2024-11-05 10:35:31.923604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4f4f834f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.913 [2024-11-05 10:35:31.923646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.913 [2024-11-05 10:35:31.923753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.913 [2024-11-05 10:35:31.923777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.913 #36 NEW cov: 12535 ft: 15123 corp: 19/245b lim: 30 exec/s: 36 rss: 74Mb L: 13/25 MS: 1 InsertRepeatedBytes- 00:08:06.171 [2024-11-05 10:35:32.013268] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (81216) > buf size (4096) 00:08:06.171 [2024-11-05 10:35:32.013555] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300004f0a 00:08:06.171 [2024-11-05 10:35:32.014028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4f4f004e cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:06.171 [2024-11-05 10:35:32.014068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.171 [2024-11-05 10:35:32.014170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0a4f834f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.172 [2024-11-05 10:35:32.014194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.172 #37 NEW cov: 12535 ft: 15130 corp: 20/257b lim: 30 exec/s: 37 rss: 74Mb L: 12/25 MS: 1 ChangeBinInt- 00:08:06.172 [2024-11-05 10:35:32.103898] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (343360) > buf size (4096) 00:08:06.172 [2024-11-05 10:35:32.104178] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300004f0a 00:08:06.172 [2024-11-05 10:35:32.104686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4f4f814f cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.172 [2024-11-05 10:35:32.104729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.172 [2024-11-05 10:35:32.104833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0a4f834f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.172 [2024-11-05 10:35:32.104856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.172 #38 NEW cov: 12535 ft: 15165 corp: 21/269b lim: 30 exec/s: 38 rss: 74Mb L: 12/25 MS: 1 ShuffleBytes- 00:08:06.172 [2024-11-05 10:35:32.164297] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (867648) > buf size (4096) 00:08:06.172 [2024-11-05 10:35:32.165588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4f4f834f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.172 [2024-11-05 10:35:32.165626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.172 [2024-11-05 10:35:32.165727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.172 [2024-11-05 10:35:32.165757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.172 [2024-11-05 10:35:32.165854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.172 [2024-11-05 10:35:32.165877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.172 [2024-11-05 10:35:32.165971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.172 [2024-11-05 10:35:32.165993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.172 #39 NEW cov: 12545 ft: 15245 corp: 22/296b lim: 30 exec/s: 39 rss: 74Mb L: 27/27 MS: 1 InsertRepeatedBytes- 00:08:06.430 [2024-11-05 10:35:32.264947] 
ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300004f4f 00:08:06.430 [2024-11-05 10:35:32.265232] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300004f4f 00:08:06.430 [2024-11-05 10:35:32.265504] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa4f 00:08:06.430 [2024-11-05 10:35:32.266028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4f4f834f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.430 [2024-11-05 10:35:32.266065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.430 [2024-11-05 10:35:32.266170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0100834f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.430 [2024-11-05 10:35:32.266193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.430 [2024-11-05 10:35:32.266292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4f010000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.430 [2024-11-05 10:35:32.266314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.430 #40 NEW cov: 12545 ft: 15264 corp: 23/318b lim: 30 exec/s: 20 rss: 74Mb L: 22/27 MS: 1 CopyPart- 00:08:06.430 #40 DONE cov: 12545 ft: 15264 corp: 23/318b lim: 30 exec/s: 20 rss: 74Mb 00:08:06.430 ###### Recommended dictionary. ###### 00:08:06.430 "\001\000\000\012" # Uses: 3 00:08:06.431 "\012\000\000\000\000\000\000\000" # Uses: 0 00:08:06.431 ###### End of recommended dictionary. 
###### 00:08:06.431 Done 40 runs in 2 second(s) 00:08:06.431 10:35:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:08:06.431 10:35:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:06.431 10:35:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:06.431 10:35:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:08:06.431 10:35:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:08:06.431 10:35:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:06.431 10:35:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:06.431 10:35:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:08:06.431 10:35:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:08:06.431 10:35:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:06.431 10:35:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:06.431 10:35:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:08:06.431 10:35:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:08:06.431 10:35:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:08:06.431 10:35:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:08:06.431 10:35:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:06.431 10:35:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:06.431 10:35:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:06.431 10:35:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:08:06.431 [2024-11-05 10:35:32.475270] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:08:06.431 [2024-11-05 10:35:32.475326] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2861052 ] 00:08:06.689 [2024-11-05 10:35:32.720337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.689 [2024-11-05 10:35:32.767672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.949 [2024-11-05 10:35:32.831634] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.949 [2024-11-05 10:35:32.847862] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:08:06.949 INFO: Running with entropic power schedule (0xFF, 100). 
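The start_llvm_fuzz trace above (fuzzer index 2) amounts to a short shell sequence: derive the TCP port from the fuzzer index, create the per-fuzzer corpus directory, rewrite the trsvcid in the shared fuzz_json.conf, write the LeakSanitizer suppressions, and launch llvm_nvme_fuzz against the resulting target ID. The sketch below consolidates those traced commands into one snippet for readability; the flags and paths are copied from the trace, while the standalone-script framing, the WORKSPACE/FUZZER_IDX variables, and the two output redirections (not visible in the trace) are illustrative assumptions rather than the actual contents of nvmf/run.sh.

#!/usr/bin/env bash
# Illustrative consolidation of the start_llvm_fuzz steps traced above; not the
# actual body of nvmf/run.sh. WORKSPACE and FUZZER_IDX are placeholders.
set -ex

WORKSPACE=/var/jenkins/workspace/short-fuzz-phy-autotest
FUZZER_IDX=2            # fuzzer_type in the trace; 3 is used for the next run
TIMEN=1                 # -t, run time in seconds
CORE=0x1                # -m, reactor core mask

port=44$(printf %02d "$FUZZER_IDX")                  # 4402 for index 2, 4403 for 3
corpus_dir=$WORKSPACE/spdk/../corpus/llvm_nvmf_$FUZZER_IDX
nvmf_cfg=/tmp/fuzz_json_$FUZZER_IDX.conf
suppress_file=/var/tmp/suppress_nvmf_fuzz

mkdir -p "$corpus_dir"

# Point the shared JSON target config at this fuzzer's TCP port; the redirect
# into $nvmf_cfg is assumed, the trace only shows the sed expression.
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    "$WORKSPACE/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

# Known-benign leaks suppressed for the run (redirects likewise assumed).
echo leak:spdk_nvmf_qpair_disconnect  > "$suppress_file"
echo leak:nvmf_ctrlr_create          >> "$suppress_file"

trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0 \
  "$WORKSPACE/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
    -m "$CORE" -s 512 -P "$WORKSPACE/spdk/../output/llvm/" \
    -F "$trid" -c "$nvmf_cfg" -t "$TIMEN" -D "$corpus_dir" -Z "$FUZZER_IDX"

The same sequence with FUZZER_IDX=3 corresponds to the port 4403 launch traced further down in this log.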
00:08:06.949 INFO: Seed: 3035858341 00:08:06.949 INFO: Loaded 1 modules (387441 inline 8-bit counters): 387441 [0x2c3ac4c, 0x2c995bd), 00:08:06.949 INFO: Loaded 1 PC tables (387441 PCs): 387441 [0x2c995c0,0x3282cd0), 00:08:06.949 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:08:06.949 INFO: A corpus is not provided, starting from an empty corpus 00:08:06.949 #2 INITED exec/s: 0 rss: 66Mb 00:08:06.949 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:06.949 This may also happen if the target rejected all inputs we tried so far 00:08:06.949 [2024-11-05 10:35:32.893670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff007e cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.949 [2024-11-05 10:35:32.893698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.949 [2024-11-05 10:35:32.893777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.949 [2024-11-05 10:35:32.893794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.208 NEW_FUNC[1/715]: 0x43ef78 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:08:07.208 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:07.208 #5 NEW cov: 12228 ft: 12225 corp: 2/15b lim: 35 exec/s: 0 rss: 73Mb L: 14/14 MS: 3 ChangeByte-CrossOver-InsertRepeatedBytes- 00:08:07.208 [2024-11-05 10:35:33.214473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfff007e cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.208 [2024-11-05 10:35:33.214509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.208 [2024-11-05 10:35:33.214566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.208 [2024-11-05 10:35:33.214581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.208 #6 NEW cov: 12341 ft: 12668 corp: 3/29b lim: 35 exec/s: 0 rss: 73Mb L: 14/14 MS: 1 ChangeBit- 00:08:07.208 [2024-11-05 10:35:33.274721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff007e cdw11:df00ff7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.208 [2024-11-05 10:35:33.274749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.208 [2024-11-05 10:35:33.274803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.208 [2024-11-05 10:35:33.274818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.208 [2024-11-05 10:35:33.274870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 
cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.208 [2024-11-05 10:35:33.274888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.467 #7 NEW cov: 12347 ft: 13189 corp: 4/56b lim: 35 exec/s: 0 rss: 73Mb L: 27/27 MS: 1 CrossOver- 00:08:07.467 [2024-11-05 10:35:33.314370] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:07.467 [2024-11-05 10:35:33.314626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfff007e cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.467 [2024-11-05 10:35:33.314653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.467 [2024-11-05 10:35:33.314707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.467 [2024-11-05 10:35:33.314728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.467 #13 NEW cov: 12443 ft: 13494 corp: 5/75b lim: 35 exec/s: 0 rss: 73Mb L: 19/27 MS: 1 InsertRepeatedBytes- 00:08:07.467 [2024-11-05 10:35:33.374795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfff007e cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.467 [2024-11-05 10:35:33.374821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.467 [2024-11-05 10:35:33.374875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.467 [2024-11-05 10:35:33.374890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.467 #14 NEW cov: 12443 ft: 13636 corp: 6/89b lim: 35 exec/s: 0 rss: 73Mb L: 14/27 MS: 1 ShuffleBytes- 00:08:07.467 [2024-11-05 10:35:33.414786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfff007e cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.467 [2024-11-05 10:35:33.414811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.467 #15 NEW cov: 12443 ft: 14047 corp: 7/99b lim: 35 exec/s: 0 rss: 73Mb L: 10/27 MS: 1 EraseBytes- 00:08:07.467 [2024-11-05 10:35:33.454887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfff007e cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.467 [2024-11-05 10:35:33.454911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.468 #16 NEW cov: 12443 ft: 14107 corp: 8/112b lim: 35 exec/s: 0 rss: 73Mb L: 13/27 MS: 1 InsertRepeatedBytes- 00:08:07.468 [2024-11-05 10:35:33.515192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ff00007e cdw11:fe000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.468 [2024-11-05 10:35:33.515217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.468 [2024-11-05 10:35:33.515271] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.468 [2024-11-05 10:35:33.515285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.468 #17 NEW cov: 12443 ft: 14156 corp: 9/126b lim: 35 exec/s: 0 rss: 73Mb L: 14/27 MS: 1 ChangeBinInt- 00:08:07.727 [2024-11-05 10:35:33.555306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ff00007e cdw11:fe000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.727 [2024-11-05 10:35:33.555332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.727 [2024-11-05 10:35:33.555387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:28ff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.727 [2024-11-05 10:35:33.555405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.727 #18 NEW cov: 12443 ft: 14252 corp: 10/141b lim: 35 exec/s: 0 rss: 73Mb L: 15/27 MS: 1 InsertByte- 00:08:07.727 [2024-11-05 10:35:33.615783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff007e cdw11:df00ff7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.727 [2024-11-05 10:35:33.615809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.727 [2024-11-05 10:35:33.615864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.727 [2024-11-05 10:35:33.615879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.727 [2024-11-05 10:35:33.615933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.727 [2024-11-05 10:35:33.615948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.727 [2024-11-05 10:35:33.616000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.727 [2024-11-05 10:35:33.616014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:07.727 #19 NEW cov: 12443 ft: 14834 corp: 11/172b lim: 35 exec/s: 0 rss: 73Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:08:07.727 [2024-11-05 10:35:33.675662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff007e cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.727 [2024-11-05 10:35:33.675687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.727 [2024-11-05 10:35:33.675742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.727 [2024-11-05 10:35:33.675756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.727 #20 NEW cov: 12443 ft: 14877 corp: 12/186b lim: 35 exec/s: 0 rss: 73Mb L: 14/31 MS: 1 ShuffleBytes- 00:08:07.727 [2024-11-05 10:35:33.715824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfff007e cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.727 [2024-11-05 10:35:33.715849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.727 [2024-11-05 10:35:33.715904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.727 [2024-11-05 10:35:33.715919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.727 #21 NEW cov: 12443 ft: 14938 corp: 13/200b lim: 35 exec/s: 0 rss: 73Mb L: 14/31 MS: 1 ShuffleBytes- 00:08:07.727 [2024-11-05 10:35:33.756062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff007e cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.727 [2024-11-05 10:35:33.756086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.727 [2024-11-05 10:35:33.756140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:0600ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.727 [2024-11-05 10:35:33.756154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.727 NEW_FUNC[1/2]: 0x12fb618 in spdk_nvmf_ctrlr_identify_iocs_specific /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3177 00:08:07.727 NEW_FUNC[2/2]: 0x1c30d58 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:07.727 #22 NEW cov: 12483 ft: 15008 corp: 14/226b lim: 35 exec/s: 0 rss: 74Mb L: 26/31 MS: 1 InsertRepeatedBytes- 00:08:07.987 [2024-11-05 10:35:33.816099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff007e cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.987 [2024-11-05 10:35:33.816125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.987 [2024-11-05 10:35:33.816179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.987 [2024-11-05 10:35:33.816193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.987 #23 NEW cov: 12483 ft: 15025 corp: 15/240b lim: 35 exec/s: 0 rss: 74Mb L: 14/31 MS: 1 CopyPart- 00:08:07.987 [2024-11-05 10:35:33.856378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff007e cdw11:5f00ff7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.987 [2024-11-05 10:35:33.856406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.987 [2024-11-05 10:35:33.856464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:07.987 [2024-11-05 10:35:33.856479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.987 [2024-11-05 10:35:33.856533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.987 [2024-11-05 10:35:33.856548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.987 #24 NEW cov: 12483 ft: 15032 corp: 16/267b lim: 35 exec/s: 24 rss: 74Mb L: 27/31 MS: 1 ChangeBit- 00:08:07.987 [2024-11-05 10:35:33.896256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff007e cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.987 [2024-11-05 10:35:33.896282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.987 [2024-11-05 10:35:33.896338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.987 [2024-11-05 10:35:33.896353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.987 #25 NEW cov: 12483 ft: 15113 corp: 17/281b lim: 35 exec/s: 25 rss: 74Mb L: 14/31 MS: 1 ShuffleBytes- 00:08:07.987 [2024-11-05 10:35:33.936410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ff00007e cdw11:fe000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.987 [2024-11-05 10:35:33.936435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.987 [2024-11-05 10:35:33.936490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:28ff00ff cdw11:ff0032ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.987 [2024-11-05 10:35:33.936505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.987 #26 NEW cov: 12483 ft: 15149 corp: 18/297b lim: 35 exec/s: 26 rss: 74Mb L: 16/31 MS: 1 InsertByte- 00:08:07.987 [2024-11-05 10:35:33.996596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfff007e cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.987 [2024-11-05 10:35:33.996621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.987 [2024-11-05 10:35:33.996680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:9f9f00ff cdw11:9f009f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.987 [2024-11-05 10:35:33.996694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.987 #27 NEW cov: 12483 ft: 15164 corp: 19/317b lim: 35 exec/s: 27 rss: 74Mb L: 20/31 MS: 1 InsertRepeatedBytes- 00:08:07.987 [2024-11-05 10:35:34.036674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ff00007e cdw11:7100003a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.987 [2024-11-05 10:35:34.036699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:08:07.987 [2024-11-05 10:35:34.036753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ea780090 cdw11:ff0024ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.987 [2024-11-05 10:35:34.036769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.987 #28 NEW cov: 12483 ft: 15185 corp: 20/331b lim: 35 exec/s: 28 rss: 74Mb L: 14/31 MS: 1 CMP- DE: "\000:q\271\220\352x$"- 00:08:08.247 [2024-11-05 10:35:34.076955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff007e cdw11:df00ff7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.247 [2024-11-05 10:35:34.076980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.247 [2024-11-05 10:35:34.077033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.247 [2024-11-05 10:35:34.077048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.247 [2024-11-05 10:35:34.077100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.247 [2024-11-05 10:35:34.077113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.247 #29 NEW cov: 12483 ft: 15192 corp: 21/358b lim: 35 exec/s: 29 rss: 74Mb L: 27/31 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:08:08.247 [2024-11-05 10:35:34.116924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfff007e cdw11:ff003fff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.247 [2024-11-05 10:35:34.116949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.247 [2024-11-05 10:35:34.117003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.247 [2024-11-05 10:35:34.117019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.247 #30 NEW cov: 12483 ft: 15221 corp: 22/372b lim: 35 exec/s: 30 rss: 74Mb L: 14/31 MS: 1 ChangeByte- 00:08:08.247 [2024-11-05 10:35:34.176963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff007e cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.247 [2024-11-05 10:35:34.176989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.247 #31 NEW cov: 12483 ft: 15233 corp: 23/384b lim: 35 exec/s: 31 rss: 74Mb L: 12/31 MS: 1 EraseBytes- 00:08:08.247 [2024-11-05 10:35:34.217204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ff00007e cdw11:fe000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.247 [2024-11-05 10:35:34.217231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.247 [2024-11-05 10:35:34.217289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:28ff00ff cdw11:ff0032ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.248 [2024-11-05 10:35:34.217303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.248 #32 NEW cov: 12483 ft: 15244 corp: 24/400b lim: 35 exec/s: 32 rss: 74Mb L: 16/31 MS: 1 CrossOver- 00:08:08.248 [2024-11-05 10:35:34.277393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ff00007e cdw11:fe000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.248 [2024-11-05 10:35:34.277419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.248 [2024-11-05 10:35:34.277473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff0032ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.248 [2024-11-05 10:35:34.277488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.248 #33 NEW cov: 12483 ft: 15270 corp: 25/416b lim: 35 exec/s: 33 rss: 74Mb L: 16/31 MS: 1 ShuffleBytes- 00:08:08.507 [2024-11-05 10:35:34.337593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfff007e cdw11:ff003fff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.507 [2024-11-05 10:35:34.337620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.507 [2024-11-05 10:35:34.337675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.507 [2024-11-05 10:35:34.337690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.507 #34 NEW cov: 12483 ft: 15301 corp: 26/430b lim: 35 exec/s: 34 rss: 74Mb L: 14/31 MS: 1 ShuffleBytes- 00:08:08.507 [2024-11-05 10:35:34.397514] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:08.507 [2024-11-05 10:35:34.397775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0700007e cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.507 [2024-11-05 10:35:34.397801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.507 [2024-11-05 10:35:34.397855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00ff0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.507 [2024-11-05 10:35:34.397871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.507 #35 NEW cov: 12483 ft: 15321 corp: 27/444b lim: 35 exec/s: 35 rss: 74Mb L: 14/31 MS: 1 ChangeBinInt- 00:08:08.507 [2024-11-05 10:35:34.437919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.507 [2024-11-05 10:35:34.437946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.507 #36 NEW cov: 12483 ft: 15674 corp: 28/458b lim: 35 exec/s: 36 rss: 74Mb L: 14/31 MS: 1 CMP- DE: 
"\001\000"- 00:08:08.507 [2024-11-05 10:35:34.498030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff0027 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.507 [2024-11-05 10:35:34.498056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.507 [2024-11-05 10:35:34.498110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.507 [2024-11-05 10:35:34.498124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.507 #37 NEW cov: 12483 ft: 15705 corp: 29/472b lim: 35 exec/s: 37 rss: 74Mb L: 14/31 MS: 1 ChangeByte- 00:08:08.507 [2024-11-05 10:35:34.538278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:6c6c000a cdw11:6c006c6c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.507 [2024-11-05 10:35:34.538304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.507 [2024-11-05 10:35:34.538359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:6c6c006c cdw11:6c006c6c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.507 [2024-11-05 10:35:34.538373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.507 [2024-11-05 10:35:34.538425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:6c6c006c cdw11:6c006c6c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.507 [2024-11-05 10:35:34.538439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.507 #38 NEW cov: 12483 ft: 15731 corp: 30/496b lim: 35 exec/s: 38 rss: 74Mb L: 24/31 MS: 1 InsertRepeatedBytes- 00:08:08.507 [2024-11-05 10:35:34.578234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dfff007e cdw11:3a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.507 [2024-11-05 10:35:34.578260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.507 [2024-11-05 10:35:34.578317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.507 [2024-11-05 10:35:34.578332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.767 #39 NEW cov: 12483 ft: 15753 corp: 31/510b lim: 35 exec/s: 39 rss: 74Mb L: 14/31 MS: 1 CrossOver- 00:08:08.767 [2024-11-05 10:35:34.638320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ff00007e cdw11:fe000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.767 [2024-11-05 10:35:34.638346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.767 #40 NEW cov: 12483 ft: 15819 corp: 32/518b lim: 35 exec/s: 40 rss: 74Mb L: 8/31 MS: 1 CrossOver- 00:08:08.767 [2024-11-05 10:35:34.678562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff0027 
cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.767 [2024-11-05 10:35:34.678587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.767 [2024-11-05 10:35:34.678643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ff0a00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.767 [2024-11-05 10:35:34.678657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.767 #41 NEW cov: 12483 ft: 15823 corp: 33/532b lim: 35 exec/s: 41 rss: 74Mb L: 14/31 MS: 1 CrossOver- 00:08:08.767 [2024-11-05 10:35:34.738694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff007e cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.767 [2024-11-05 10:35:34.738723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.767 [2024-11-05 10:35:34.738777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ff0900ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.767 [2024-11-05 10:35:34.738791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.767 #42 NEW cov: 12483 ft: 15831 corp: 34/547b lim: 35 exec/s: 42 rss: 74Mb L: 15/31 MS: 1 InsertByte- 00:08:08.767 [2024-11-05 10:35:34.779108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff007e cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.767 [2024-11-05 10:35:34.779136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.767 [2024-11-05 10:35:34.779189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ff7e00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.767 [2024-11-05 10:35:34.779204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.767 [2024-11-05 10:35:34.779255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.767 [2024-11-05 10:35:34.779269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.767 [2024-11-05 10:35:34.779323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:0aff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.767 [2024-11-05 10:35:34.779337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.767 #43 NEW cov: 12483 ft: 15838 corp: 35/575b lim: 35 exec/s: 43 rss: 74Mb L: 28/31 MS: 1 CopyPart- 00:08:08.767 [2024-11-05 10:35:34.819224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ff00007e cdw11:fe000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.767 [2024-11-05 10:35:34.819249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.767 [2024-11-05 10:35:34.819305] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:28ff00ff cdw11:ff0032ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.767 [2024-11-05 10:35:34.819321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.767 [2024-11-05 10:35:34.819372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.767 [2024-11-05 10:35:34.819386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.767 [2024-11-05 10:35:34.819438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.767 [2024-11-05 10:35:34.819453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.767 #44 NEW cov: 12483 ft: 15882 corp: 36/608b lim: 35 exec/s: 44 rss: 74Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:08:09.027 [2024-11-05 10:35:34.858942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff007e cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.027 [2024-11-05 10:35:34.858968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.027 #45 NEW cov: 12483 ft: 15907 corp: 37/621b lim: 35 exec/s: 22 rss: 74Mb L: 13/33 MS: 1 EraseBytes- 00:08:09.027 #45 DONE cov: 12483 ft: 15907 corp: 37/621b lim: 35 exec/s: 22 rss: 74Mb 00:08:09.027 ###### Recommended dictionary. ###### 00:08:09.027 "\000:q\271\220\352x$" # Uses: 0 00:08:09.027 "\377\377\377\377\377\377\377\377" # Uses: 0 00:08:09.027 "\001\000" # Uses: 0 00:08:09.027 ###### End of recommended dictionary. 
###### 00:08:09.027 Done 45 runs in 2 second(s) 00:08:09.027 10:35:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:08:09.027 10:35:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:09.027 10:35:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:09.027 10:35:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:08:09.027 10:35:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:08:09.027 10:35:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:09.027 10:35:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:09.027 10:35:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:08:09.027 10:35:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:08:09.027 10:35:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:09.027 10:35:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:09.027 10:35:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:08:09.027 10:35:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:08:09.027 10:35:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:08:09.027 10:35:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:08:09.027 10:35:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:09.027 10:35:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:09.027 10:35:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:09.027 10:35:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:08:09.027 [2024-11-05 10:35:35.049405] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:08:09.027 [2024-11-05 10:35:35.049474] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2861413 ] 00:08:09.287 [2024-11-05 10:35:35.316755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.287 [2024-11-05 10:35:35.364309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.546 [2024-11-05 10:35:35.428188] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.546 [2024-11-05 10:35:35.444432] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:08:09.546 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:09.546 INFO: Seed: 1336891511 00:08:09.546 INFO: Loaded 1 modules (387441 inline 8-bit counters): 387441 [0x2c3ac4c, 0x2c995bd), 00:08:09.546 INFO: Loaded 1 PC tables (387441 PCs): 387441 [0x2c995c0,0x3282cd0), 00:08:09.546 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:08:09.546 INFO: A corpus is not provided, starting from an empty corpus 00:08:09.546 #2 INITED exec/s: 0 rss: 66Mb 00:08:09.546 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:09.546 This may also happen if the target rejected all inputs we tried so far 00:08:09.830 NEW_FUNC[1/704]: 0x440c58 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:08:09.830 NEW_FUNC[2/704]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:09.830 #11 NEW cov: 12139 ft: 12136 corp: 2/14b lim: 20 exec/s: 0 rss: 73Mb L: 13/13 MS: 4 ShuffleBytes-ShuffleBytes-CrossOver-InsertRepeatedBytes- 00:08:09.830 #12 NEW cov: 12253 ft: 12947 corp: 3/22b lim: 20 exec/s: 0 rss: 73Mb L: 8/13 MS: 1 EraseBytes- 00:08:09.830 #13 NEW cov: 12275 ft: 13421 corp: 4/42b lim: 20 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:08:10.089 #16 NEW cov: 12360 ft: 13747 corp: 5/53b lim: 20 exec/s: 0 rss: 73Mb L: 11/20 MS: 3 ChangeBit-ShuffleBytes-CrossOver- 00:08:10.089 #22 NEW cov: 12360 ft: 13807 corp: 6/67b lim: 20 exec/s: 0 rss: 73Mb L: 14/20 MS: 1 InsertByte- 00:08:10.090 #23 NEW cov: 12360 ft: 13882 corp: 7/80b lim: 20 exec/s: 0 rss: 73Mb L: 13/20 MS: 1 CrossOver- 00:08:10.090 #24 NEW cov: 12360 ft: 13946 corp: 8/94b lim: 20 exec/s: 0 rss: 73Mb L: 14/20 MS: 1 InsertByte- 00:08:10.090 #25 NEW cov: 12361 ft: 13984 corp: 9/112b lim: 20 exec/s: 0 rss: 73Mb L: 18/20 MS: 1 CopyPart- 00:08:10.349 #26 NEW cov: 12361 ft: 14079 corp: 10/130b lim: 20 exec/s: 0 rss: 73Mb L: 18/20 MS: 1 CrossOver- 00:08:10.349 #27 NEW cov: 12361 ft: 14116 corp: 11/144b lim: 20 exec/s: 0 rss: 73Mb L: 14/20 MS: 1 InsertByte- 00:08:10.349 #28 NEW cov: 12361 ft: 14134 corp: 12/156b lim: 20 exec/s: 0 rss: 73Mb L: 12/20 MS: 1 InsertByte- 00:08:10.349 #29 NEW cov: 12361 ft: 14166 corp: 13/174b lim: 20 exec/s: 0 rss: 73Mb L: 18/20 MS: 1 ChangeByte- 00:08:10.349 NEW_FUNC[1/1]: 0x1c30d58 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:10.349 #30 NEW cov: 12384 ft: 14199 corp: 14/183b lim: 20 exec/s: 0 rss: 74Mb L: 9/20 MS: 1 InsertRepeatedBytes- 00:08:10.607 #33 NEW cov: 12384 ft: 14214 corp: 15/197b lim: 20 exec/s: 0 rss: 74Mb L: 14/20 MS: 3 InsertByte-ChangeByte-CrossOver- 00:08:10.607 #34 NEW cov: 12384 ft: 14284 corp: 16/215b lim: 20 exec/s: 34 rss: 74Mb L: 18/20 MS: 1 ChangeByte- 00:08:10.607 #37 NEW cov: 12384 ft: 14648 corp: 17/219b lim: 20 exec/s: 37 rss: 74Mb L: 4/20 MS: 3 CrossOver-ChangeBit-CopyPart- 00:08:10.607 #38 NEW cov: 12384 ft: 14697 corp: 18/228b lim: 20 exec/s: 38 rss: 74Mb L: 9/20 MS: 1 CrossOver- 00:08:10.607 #39 NEW cov: 12384 ft: 14715 corp: 19/245b lim: 20 exec/s: 39 rss: 74Mb L: 17/20 MS: 1 CrossOver- 00:08:10.607 [2024-11-05 10:35:36.683428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:10.607 [2024-11-05 10:35:36.683474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.865 
NEW_FUNC[1/17]: 0x1366e78 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3482 00:08:10.865 NEW_FUNC[2/17]: 0x13679f8 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3424 00:08:10.865 #46 NEW cov: 12630 ft: 14988 corp: 20/254b lim: 20 exec/s: 46 rss: 74Mb L: 9/20 MS: 2 ChangeBit-CMP- DE: "\005\000\000\000\000\000\000\000"- 00:08:10.865 #48 NEW cov: 12630 ft: 14993 corp: 21/270b lim: 20 exec/s: 48 rss: 74Mb L: 16/20 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:08:10.865 #49 NEW cov: 12630 ft: 14999 corp: 22/281b lim: 20 exec/s: 49 rss: 74Mb L: 11/20 MS: 1 EraseBytes- 00:08:10.865 #50 NEW cov: 12630 ft: 15045 corp: 23/301b lim: 20 exec/s: 50 rss: 74Mb L: 20/20 MS: 1 ChangeBit- 00:08:10.865 [2024-11-05 10:35:36.864260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:10.865 [2024-11-05 10:35:36.864291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.865 NEW_FUNC[1/2]: 0x14dccd8 in nvmf_transport_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:784 00:08:10.865 NEW_FUNC[2/2]: 0x15041e8 in nvmf_tcp_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3702 00:08:10.865 #51 NEW cov: 12686 ft: 15150 corp: 24/319b lim: 20 exec/s: 51 rss: 74Mb L: 18/20 MS: 1 PersAutoDict- DE: "\005\000\000\000\000\000\000\000"- 00:08:10.865 #52 NEW cov: 12686 ft: 15152 corp: 25/328b lim: 20 exec/s: 52 rss: 74Mb L: 9/20 MS: 1 ShuffleBytes- 00:08:11.123 #53 NEW cov: 12686 ft: 15159 corp: 26/342b lim: 20 exec/s: 53 rss: 74Mb L: 14/20 MS: 1 ShuffleBytes- 00:08:11.123 [2024-11-05 10:35:37.004335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:11.123 [2024-11-05 10:35:37.004366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.123 #54 NEW cov: 12686 ft: 15180 corp: 27/351b lim: 20 exec/s: 54 rss: 74Mb L: 9/20 MS: 1 PersAutoDict- DE: "\005\000\000\000\000\000\000\000"- 00:08:11.123 #55 NEW cov: 12686 ft: 15200 corp: 28/365b lim: 20 exec/s: 55 rss: 74Mb L: 14/20 MS: 1 InsertRepeatedBytes- 00:08:11.123 #56 NEW cov: 12686 ft: 15204 corp: 29/372b lim: 20 exec/s: 56 rss: 74Mb L: 7/20 MS: 1 EraseBytes- 00:08:11.123 #57 NEW cov: 12686 ft: 15272 corp: 30/383b lim: 20 exec/s: 57 rss: 74Mb L: 11/20 MS: 1 ChangeByte- 00:08:11.381 #59 NEW cov: 12686 ft: 15278 corp: 31/390b lim: 20 exec/s: 59 rss: 74Mb L: 7/20 MS: 2 ChangeByte-ChangeBinInt- 00:08:11.381 [2024-11-05 10:35:37.285292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:11.381 [2024-11-05 10:35:37.285320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.381 #60 NEW cov: 12686 ft: 15337 corp: 32/404b lim: 20 exec/s: 60 rss: 74Mb L: 14/20 MS: 1 CopyPart- 00:08:11.381 #61 NEW cov: 12686 ft: 15346 corp: 33/421b lim: 20 exec/s: 61 rss: 74Mb L: 17/20 MS: 1 ChangeBit- 00:08:11.381 #62 NEW cov: 12686 ft: 15353 corp: 34/430b lim: 20 exec/s: 62 rss: 74Mb L: 9/20 MS: 1 ChangeByte- 00:08:11.639 #63 NEW cov: 12686 ft: 15359 corp: 35/444b lim: 20 exec/s: 63 rss: 74Mb L: 14/20 MS: 1 ChangeBit- 
00:08:11.639 #64 pulse cov: 12686 ft: 15365 corp: 35/444b lim: 20 exec/s: 32 rss: 74Mb 00:08:11.639 #64 NEW cov: 12686 ft: 15365 corp: 36/462b lim: 20 exec/s: 32 rss: 74Mb L: 18/20 MS: 1 ChangeBinInt- 00:08:11.639 #64 DONE cov: 12686 ft: 15365 corp: 36/462b lim: 20 exec/s: 32 rss: 74Mb 00:08:11.639 ###### Recommended dictionary. ###### 00:08:11.639 "\005\000\000\000\000\000\000\000" # Uses: 2 00:08:11.639 ###### End of recommended dictionary. ###### 00:08:11.639 Done 64 runs in 2 second(s) 00:08:11.639 10:35:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:08:11.639 10:35:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:11.639 10:35:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:11.639 10:35:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:11.639 10:35:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:08:11.639 10:35:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:11.639 10:35:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:11.639 10:35:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:08:11.639 10:35:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:08:11.639 10:35:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:11.639 10:35:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:11.639 10:35:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:08:11.639 10:35:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:08:11.639 10:35:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:08:11.639 10:35:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:08:11.639 10:35:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:11.639 10:35:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:11.639 10:35:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:11.639 10:35:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:08:11.639 [2024-11-05 10:35:37.692372] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
00:08:11.639 [2024-11-05 10:35:37.692443] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2861767 ] 00:08:11.898 [2024-11-05 10:35:37.962526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.156 [2024-11-05 10:35:38.011085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.156 [2024-11-05 10:35:38.074978] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.156 [2024-11-05 10:35:38.091220] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:08:12.156 INFO: Running with entropic power schedule (0xFF, 100). 00:08:12.156 INFO: Seed: 3984891923 00:08:12.156 INFO: Loaded 1 modules (387441 inline 8-bit counters): 387441 [0x2c3ac4c, 0x2c995bd), 00:08:12.156 INFO: Loaded 1 PC tables (387441 PCs): 387441 [0x2c995c0,0x3282cd0), 00:08:12.156 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:08:12.156 INFO: A corpus is not provided, starting from an empty corpus 00:08:12.156 #2 INITED exec/s: 0 rss: 66Mb 00:08:12.156 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:12.156 This may also happen if the target rejected all inputs we tried so far 00:08:12.156 [2024-11-05 10:35:38.137407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e6e60b0a cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.156 [2024-11-05 10:35:38.137436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.156 [2024-11-05 10:35:38.137492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.156 [2024-11-05 10:35:38.137507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.156 [2024-11-05 10:35:38.137559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.156 [2024-11-05 10:35:38.137573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.156 [2024-11-05 10:35:38.137626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.156 [2024-11-05 10:35:38.137638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:12.414 NEW_FUNC[1/716]: 0x441d58 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:08:12.414 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:12.414 #5 NEW cov: 12249 ft: 12246 corp: 2/32b lim: 35 exec/s: 0 rss: 73Mb L: 31/31 MS: 3 CopyPart-ChangeBit-InsertRepeatedBytes- 00:08:12.414 [2024-11-05 10:35:38.458003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 
cid:4 nsid:0 cdw10:0b0a240a cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.414 [2024-11-05 10:35:38.458040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.414 #7 NEW cov: 12362 ft: 13609 corp: 3/43b lim: 35 exec/s: 0 rss: 73Mb L: 11/31 MS: 2 InsertByte-CrossOver- 00:08:12.672 [2024-11-05 10:35:38.507760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8106 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.672 [2024-11-05 10:35:38.507789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.672 #15 NEW cov: 12368 ft: 13931 corp: 4/51b lim: 35 exec/s: 0 rss: 73Mb L: 8/31 MS: 3 ChangeBinInt-InsertByte-InsertRepeatedBytes- 00:08:12.672 [2024-11-05 10:35:38.548363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e6e60b0a cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.672 [2024-11-05 10:35:38.548389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.672 [2024-11-05 10:35:38.548451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.672 [2024-11-05 10:35:38.548466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.672 [2024-11-05 10:35:38.548519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.672 [2024-11-05 10:35:38.548532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.672 [2024-11-05 10:35:38.548586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.672 [2024-11-05 10:35:38.548600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:12.672 #16 NEW cov: 12453 ft: 14180 corp: 5/84b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 CopyPart- 00:08:12.672 [2024-11-05 10:35:38.608362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8106 cdw11:ff7b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.672 [2024-11-05 10:35:38.608388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.672 [2024-11-05 10:35:38.608443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:7b7b7b7b cdw11:7b7b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.672 [2024-11-05 10:35:38.608457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.672 [2024-11-05 10:35:38.608510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:7b7b7b7b cdw11:7b7b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.672 [2024-11-05 10:35:38.608523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:08:12.672 #17 NEW cov: 12453 ft: 14563 corp: 6/111b lim: 35 exec/s: 0 rss: 73Mb L: 27/33 MS: 1 InsertRepeatedBytes- 00:08:12.672 [2024-11-05 10:35:38.668691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e6e60b0a cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.672 [2024-11-05 10:35:38.668721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.672 [2024-11-05 10:35:38.668775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.672 [2024-11-05 10:35:38.668790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.672 [2024-11-05 10:35:38.668843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.672 [2024-11-05 10:35:38.668856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.672 [2024-11-05 10:35:38.668910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ed4ee6e6 cdw11:f0a30001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.672 [2024-11-05 10:35:38.668924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:12.672 #18 NEW cov: 12453 ft: 14646 corp: 7/144b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 CMP- DE: "\355N\360\243\273q:\000"- 00:08:12.672 [2024-11-05 10:35:38.728891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e6e60b0a cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.672 [2024-11-05 10:35:38.728917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.672 [2024-11-05 10:35:38.728977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.672 [2024-11-05 10:35:38.728991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.672 [2024-11-05 10:35:38.729060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:191c1a19 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.672 [2024-11-05 10:35:38.729075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.672 [2024-11-05 10:35:38.729128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.672 [2024-11-05 10:35:38.729142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:12.931 #19 NEW cov: 12453 ft: 14715 corp: 8/175b lim: 35 exec/s: 0 rss: 73Mb L: 31/33 MS: 1 ChangeBinInt- 00:08:12.931 [2024-11-05 10:35:38.769027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0b0a240a cdw11:e6e60001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:08:12.931 [2024-11-05 10:35:38.769053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.931 [2024-11-05 10:35:38.769124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.931 [2024-11-05 10:35:38.769140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.931 [2024-11-05 10:35:38.769193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.931 [2024-11-05 10:35:38.769206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.931 [2024-11-05 10:35:38.769258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:a1a1a1a1 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.931 [2024-11-05 10:35:38.769270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:12.931 #20 NEW cov: 12453 ft: 14762 corp: 9/205b lim: 35 exec/s: 0 rss: 73Mb L: 30/33 MS: 1 InsertRepeatedBytes- 00:08:12.931 [2024-11-05 10:35:38.829167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e6e60b0a cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.931 [2024-11-05 10:35:38.829193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.931 [2024-11-05 10:35:38.829247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.931 [2024-11-05 10:35:38.829262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.931 [2024-11-05 10:35:38.829313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:1919e61a cdw11:1ce60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.931 [2024-11-05 10:35:38.829327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.931 [2024-11-05 10:35:38.829378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.931 [2024-11-05 10:35:38.829391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:12.931 #21 NEW cov: 12453 ft: 14794 corp: 10/237b lim: 35 exec/s: 0 rss: 73Mb L: 32/33 MS: 1 InsertByte- 00:08:12.931 [2024-11-05 10:35:38.889007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e6e60b0a cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.931 [2024-11-05 10:35:38.889032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.931 [2024-11-05 10:35:38.889087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:12.931 [2024-11-05 10:35:38.889101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.931 #22 NEW cov: 12453 ft: 15042 corp: 11/257b lim: 35 exec/s: 0 rss: 73Mb L: 20/33 MS: 1 EraseBytes- 00:08:12.931 [2024-11-05 10:35:38.949194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e6e60b0a cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.931 [2024-11-05 10:35:38.949220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.931 [2024-11-05 10:35:38.949277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.931 [2024-11-05 10:35:38.949291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.931 #23 NEW cov: 12453 ft: 15056 corp: 12/277b lim: 35 exec/s: 0 rss: 73Mb L: 20/33 MS: 1 ChangeBit- 00:08:13.189 [2024-11-05 10:35:39.009725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e6e60b0a cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.189 [2024-11-05 10:35:39.009754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.189 [2024-11-05 10:35:39.009809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.189 [2024-11-05 10:35:39.009823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.189 [2024-11-05 10:35:39.009875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffe6ff cdw11:fffe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.189 [2024-11-05 10:35:39.009889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.189 [2024-11-05 10:35:39.009941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e6e6ffff cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.189 [2024-11-05 10:35:39.009954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.189 NEW_FUNC[1/1]: 0x1c30d58 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:13.189 #24 NEW cov: 12476 ft: 15095 corp: 13/310b lim: 35 exec/s: 0 rss: 74Mb L: 33/33 MS: 1 CMP- DE: "\377\377\377\377\376\377\377\377"- 00:08:13.189 [2024-11-05 10:35:39.049446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e6e60b0a cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.189 [2024-11-05 10:35:39.049472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.189 [2024-11-05 10:35:39.049530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e6e6eee6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.189 [2024-11-05 10:35:39.049544] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.189 #25 NEW cov: 12476 ft: 15124 corp: 14/330b lim: 35 exec/s: 0 rss: 74Mb L: 20/33 MS: 1 ChangeBinInt- 00:08:13.189 [2024-11-05 10:35:39.089926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e6e60b0a cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.189 [2024-11-05 10:35:39.089955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.189 [2024-11-05 10:35:39.090010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e6e60ae6 cdw11:e6d20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.189 [2024-11-05 10:35:39.090024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.190 [2024-11-05 10:35:39.090076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.190 [2024-11-05 10:35:39.090090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.190 [2024-11-05 10:35:39.090144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e6e61a19 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.190 [2024-11-05 10:35:39.090158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.190 #31 NEW cov: 12476 ft: 15215 corp: 15/362b lim: 35 exec/s: 0 rss: 74Mb L: 32/33 MS: 1 CopyPart- 00:08:13.190 [2024-11-05 10:35:39.130031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e6e60b0a cdw11:23e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.190 [2024-11-05 10:35:39.130056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.190 [2024-11-05 10:35:39.130110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.190 [2024-11-05 10:35:39.130124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.190 [2024-11-05 10:35:39.130175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.190 [2024-11-05 10:35:39.130188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.190 [2024-11-05 10:35:39.130240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e6ede6e6 cdw11:4ef00001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.190 [2024-11-05 10:35:39.130253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.190 #37 NEW cov: 12476 ft: 15276 corp: 16/396b lim: 35 exec/s: 37 rss: 74Mb L: 34/34 MS: 1 InsertByte- 00:08:13.190 [2024-11-05 10:35:39.190054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8106 cdw11:ff7b0002 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:08:13.190 [2024-11-05 10:35:39.190079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.190 [2024-11-05 10:35:39.190133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:7b7b7b7b cdw11:7b7b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.190 [2024-11-05 10:35:39.190147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.190 [2024-11-05 10:35:39.190201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:7b7b7b7b cdw11:7b7b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.190 [2024-11-05 10:35:39.190214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.190 #38 NEW cov: 12476 ft: 15325 corp: 17/423b lim: 35 exec/s: 38 rss: 74Mb L: 27/34 MS: 1 ChangeBit- 00:08:13.190 [2024-11-05 10:35:39.250397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e6e60b0a cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.190 [2024-11-05 10:35:39.250425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.190 [2024-11-05 10:35:39.250480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.190 [2024-11-05 10:35:39.250495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.190 [2024-11-05 10:35:39.250548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffe6ff cdw11:fffe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.190 [2024-11-05 10:35:39.250562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.190 [2024-11-05 10:35:39.250614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e67affff cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.190 [2024-11-05 10:35:39.250628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.448 #39 NEW cov: 12476 ft: 15331 corp: 18/456b lim: 35 exec/s: 39 rss: 74Mb L: 33/34 MS: 1 ChangeByte- 00:08:13.448 [2024-11-05 10:35:39.310571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e6e60b0a cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.448 [2024-11-05 10:35:39.310597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.448 [2024-11-05 10:35:39.310651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.448 [2024-11-05 10:35:39.310665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.448 [2024-11-05 10:35:39.310719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:1919e61a cdw11:1ce60003 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:08:13.448 [2024-11-05 10:35:39.310734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.448 [2024-11-05 10:35:39.310785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e61ce6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.448 [2024-11-05 10:35:39.310799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.448 #40 NEW cov: 12476 ft: 15343 corp: 19/488b lim: 35 exec/s: 40 rss: 74Mb L: 32/34 MS: 1 CopyPart- 00:08:13.448 [2024-11-05 10:35:39.350684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0ae60b1d cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.448 [2024-11-05 10:35:39.350710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.448 [2024-11-05 10:35:39.350770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.448 [2024-11-05 10:35:39.350785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.448 [2024-11-05 10:35:39.350839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.448 [2024-11-05 10:35:39.350853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.448 [2024-11-05 10:35:39.350906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.448 [2024-11-05 10:35:39.350921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.448 #41 NEW cov: 12476 ft: 15373 corp: 20/520b lim: 35 exec/s: 41 rss: 74Mb L: 32/34 MS: 1 InsertByte- 00:08:13.448 [2024-11-05 10:35:39.390626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8106 cdw11:ff7b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.448 [2024-11-05 10:35:39.390652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.448 [2024-11-05 10:35:39.390707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:7b7b7b7b cdw11:7b7b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.448 [2024-11-05 10:35:39.390727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.448 [2024-11-05 10:35:39.390781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:7b7b7b7b cdw11:7bff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.448 [2024-11-05 10:35:39.390795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.449 #42 NEW cov: 12476 ft: 15379 corp: 21/542b lim: 35 exec/s: 42 rss: 74Mb L: 22/34 MS: 1 EraseBytes- 00:08:13.449 [2024-11-05 10:35:39.430568] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e6e60b0a cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.449 [2024-11-05 10:35:39.430593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.449 [2024-11-05 10:35:39.430648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.449 [2024-11-05 10:35:39.430662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.449 #43 NEW cov: 12476 ft: 15439 corp: 22/562b lim: 35 exec/s: 43 rss: 74Mb L: 20/34 MS: 1 ChangeBinInt- 00:08:13.449 [2024-11-05 10:35:39.491100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e6e60b0a cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.449 [2024-11-05 10:35:39.491125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.449 [2024-11-05 10:35:39.491180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.449 [2024-11-05 10:35:39.491193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.449 [2024-11-05 10:35:39.491245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffe6ff cdw11:fffe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.449 [2024-11-05 10:35:39.491259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.449 [2024-11-05 10:35:39.491311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e67affff cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.449 [2024-11-05 10:35:39.491325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.709 #44 NEW cov: 12476 ft: 15453 corp: 23/595b lim: 35 exec/s: 44 rss: 74Mb L: 33/34 MS: 1 ShuffleBytes- 00:08:13.709 [2024-11-05 10:35:39.551241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e6e60b0a cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.709 [2024-11-05 10:35:39.551266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.709 [2024-11-05 10:35:39.551319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e6e60ae6 cdw11:e6d20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.709 [2024-11-05 10:35:39.551336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.709 [2024-11-05 10:35:39.551388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.709 [2024-11-05 10:35:39.551402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.709 [2024-11-05 10:35:39.551454] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e6e61a19 cdw11:e6000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.709 [2024-11-05 10:35:39.551468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.709 #45 NEW cov: 12476 ft: 15473 corp: 24/627b lim: 35 exec/s: 45 rss: 74Mb L: 32/34 MS: 1 ChangeByte- 00:08:13.709 [2024-11-05 10:35:39.611288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e6e60b0a cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.709 [2024-11-05 10:35:39.611315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.709 [2024-11-05 10:35:39.611369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.709 [2024-11-05 10:35:39.611384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.710 [2024-11-05 10:35:39.611439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:191c1a19 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.710 [2024-11-05 10:35:39.611453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.710 #46 NEW cov: 12476 ft: 15482 corp: 25/653b lim: 35 exec/s: 46 rss: 74Mb L: 26/34 MS: 1 EraseBytes- 00:08:13.710 [2024-11-05 10:35:39.651362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8106 cdw11:ff7b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.710 [2024-11-05 10:35:39.651388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.710 [2024-11-05 10:35:39.651443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:7b7b7b7b cdw11:7b7b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.710 [2024-11-05 10:35:39.651457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.710 [2024-11-05 10:35:39.651512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:7b7b7b7b cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.710 [2024-11-05 10:35:39.651525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.710 #47 NEW cov: 12476 ft: 15508 corp: 26/674b lim: 35 exec/s: 47 rss: 74Mb L: 21/34 MS: 1 EraseBytes- 00:08:13.710 [2024-11-05 10:35:39.691820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e6e60b0a cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.710 [2024-11-05 10:35:39.691846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.710 [2024-11-05 10:35:39.691897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.710 [2024-11-05 10:35:39.691911] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.710 [2024-11-05 10:35:39.691963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.710 [2024-11-05 10:35:39.691980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.710 [2024-11-05 10:35:39.692034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ed4ee6e6 cdw11:f0a30001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.710 [2024-11-05 10:35:39.692048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.710 [2024-11-05 10:35:39.692100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00ed713a cdw11:4ee60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.710 [2024-11-05 10:35:39.692114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:13.710 #48 NEW cov: 12476 ft: 15579 corp: 27/709b lim: 35 exec/s: 48 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:08:13.710 [2024-11-05 10:35:39.731764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e6e60b0a cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.710 [2024-11-05 10:35:39.731789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.710 [2024-11-05 10:35:39.731845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.710 [2024-11-05 10:35:39.731860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.710 [2024-11-05 10:35:39.731912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:1919e61a cdw11:1ce60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.710 [2024-11-05 10:35:39.731926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.710 [2024-11-05 10:35:39.731979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.710 [2024-11-05 10:35:39.731994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.710 #49 NEW cov: 12476 ft: 15586 corp: 28/741b lim: 35 exec/s: 49 rss: 74Mb L: 32/35 MS: 1 CopyPart- 00:08:13.710 [2024-11-05 10:35:39.771928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0ae6e60b cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.710 [2024-11-05 10:35:39.771954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.710 [2024-11-05 10:35:39.772025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.710 [2024-11-05 10:35:39.772039] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.710 [2024-11-05 10:35:39.772091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.710 [2024-11-05 10:35:39.772105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.710 [2024-11-05 10:35:39.772156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.710 [2024-11-05 10:35:39.772170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.967 #50 NEW cov: 12476 ft: 15620 corp: 29/775b lim: 35 exec/s: 50 rss: 74Mb L: 34/35 MS: 1 CopyPart- 00:08:13.967 [2024-11-05 10:35:39.811806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:1919e61a cdw11:ff7b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.967 [2024-11-05 10:35:39.811836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.967 [2024-11-05 10:35:39.811890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:7b7b7b7b cdw11:7b7b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.967 [2024-11-05 10:35:39.811905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.967 [2024-11-05 10:35:39.811958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:7b7b7b7b cdw11:7b7b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.967 [2024-11-05 10:35:39.811971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.967 #51 NEW cov: 12476 ft: 15631 corp: 30/802b lim: 35 exec/s: 51 rss: 74Mb L: 27/35 MS: 1 CrossOver- 00:08:13.967 [2024-11-05 10:35:39.872343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e6e60b0a cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.968 [2024-11-05 10:35:39.872369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.968 [2024-11-05 10:35:39.872425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2c2ce62c cdw11:2ce60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.968 [2024-11-05 10:35:39.872440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.968 [2024-11-05 10:35:39.872508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e6e6e6e6 cdw11:1a190000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.968 [2024-11-05 10:35:39.872523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.968 [2024-11-05 10:35:39.872577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e6e61ce6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.968 [2024-11-05 10:35:39.872590] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.968 [2024-11-05 10:35:39.872642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.968 [2024-11-05 10:35:39.872656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:13.968 #52 NEW cov: 12476 ft: 15646 corp: 31/837b lim: 35 exec/s: 52 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:08:13.968 [2024-11-05 10:35:39.912340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.968 [2024-11-05 10:35:39.912366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.968 [2024-11-05 10:35:39.912422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e6e610e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.968 [2024-11-05 10:35:39.912436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.968 [2024-11-05 10:35:39.912487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.968 [2024-11-05 10:35:39.912500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.968 [2024-11-05 10:35:39.912553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.968 [2024-11-05 10:35:39.912566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.968 #53 NEW cov: 12476 ft: 15660 corp: 32/871b lim: 35 exec/s: 53 rss: 74Mb L: 34/35 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\020"- 00:08:13.968 [2024-11-05 10:35:39.972493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e6e60b0a cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.968 [2024-11-05 10:35:39.972518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.968 [2024-11-05 10:35:39.972571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.968 [2024-11-05 10:35:39.972586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.968 [2024-11-05 10:35:39.972639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffe6ff cdw11:7ffe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.968 [2024-11-05 10:35:39.972652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.968 [2024-11-05 10:35:39.972705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e6e6ffff cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.968 [2024-11-05 10:35:39.972724] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.968 #54 NEW cov: 12476 ft: 15665 corp: 33/904b lim: 35 exec/s: 54 rss: 74Mb L: 33/35 MS: 1 ChangeBit- 00:08:13.968 [2024-11-05 10:35:40.012187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0b0a240a cdw11:e6e60001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.968 [2024-11-05 10:35:40.012239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.968 [2024-11-05 10:35:40.012310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0ae6a10b cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.968 [2024-11-05 10:35:40.012326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.968 [2024-11-05 10:35:40.012380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:a1a1d20a cdw11:e6a10001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.968 [2024-11-05 10:35:40.012395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.226 #55 NEW cov: 12476 ft: 15734 corp: 34/925b lim: 35 exec/s: 55 rss: 74Mb L: 21/35 MS: 1 CrossOver- 00:08:14.226 [2024-11-05 10:35:40.072760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e6e60b0a cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.226 [2024-11-05 10:35:40.072789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.226 [2024-11-05 10:35:40.072845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.226 [2024-11-05 10:35:40.072860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.226 [2024-11-05 10:35:40.072915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:19e6e61a cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.226 [2024-11-05 10:35:40.072929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.226 [2024-11-05 10:35:40.072983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e6e6e6e6 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.226 [2024-11-05 10:35:40.072997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:14.226 #56 NEW cov: 12476 ft: 15759 corp: 35/954b lim: 35 exec/s: 56 rss: 75Mb L: 29/35 MS: 1 EraseBytes- 00:08:14.226 [2024-11-05 10:35:40.132781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffe68106 cdw11:e6e60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.226 [2024-11-05 10:35:40.132809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.226 [2024-11-05 10:35:40.132864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 
cdw10:7b7be67b cdw11:7b7b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.226 [2024-11-05 10:35:40.132878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.226 [2024-11-05 10:35:40.132931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:7b7b7b7b cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.226 [2024-11-05 10:35:40.132945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.226 #57 NEW cov: 12476 ft: 15769 corp: 36/975b lim: 35 exec/s: 28 rss: 75Mb L: 21/35 MS: 1 CrossOver- 00:08:14.226 #57 DONE cov: 12476 ft: 15769 corp: 36/975b lim: 35 exec/s: 28 rss: 75Mb 00:08:14.226 ###### Recommended dictionary. ###### 00:08:14.226 "\355N\360\243\273q:\000" # Uses: 0 00:08:14.226 "\377\377\377\377\376\377\377\377" # Uses: 0 00:08:14.226 "\000\000\000\000\000\000\000\020" # Uses: 0 00:08:14.226 ###### End of recommended dictionary. ###### 00:08:14.226 Done 57 runs in 2 second(s) 00:08:14.226 10:35:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:08:14.484 10:35:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:14.484 10:35:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:14.484 10:35:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:08:14.484 10:35:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:08:14.484 10:35:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:14.484 10:35:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:14.484 10:35:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:08:14.484 10:35:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:08:14.484 10:35:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:14.484 10:35:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:14.484 10:35:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:08:14.484 10:35:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:08:14.484 10:35:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:08:14.484 10:35:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:08:14.484 10:35:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:14.484 10:35:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:14.484 10:35:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:14.484 10:35:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 
-c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:08:14.484 [2024-11-05 10:35:40.329795] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:08:14.484 [2024-11-05 10:35:40.329854] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2862126 ] 00:08:14.743 [2024-11-05 10:35:40.579206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.743 [2024-11-05 10:35:40.627394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.743 [2024-11-05 10:35:40.691306] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.743 [2024-11-05 10:35:40.707532] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:08:14.743 INFO: Running with entropic power schedule (0xFF, 100). 00:08:14.743 INFO: Seed: 2305923549 00:08:14.743 INFO: Loaded 1 modules (387441 inline 8-bit counters): 387441 [0x2c3ac4c, 0x2c995bd), 00:08:14.743 INFO: Loaded 1 PC tables (387441 PCs): 387441 [0x2c995c0,0x3282cd0), 00:08:14.743 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:08:14.743 INFO: A corpus is not provided, starting from an empty corpus 00:08:14.743 #2 INITED exec/s: 0 rss: 66Mb 00:08:14.743 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:14.743 This may also happen if the target rejected all inputs we tried so far 00:08:14.743 [2024-11-05 10:35:40.773370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:59595959 cdw11:59590002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.743 [2024-11-05 10:35:40.773407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.261 NEW_FUNC[1/716]: 0x443ef8 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:08:15.261 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:15.261 #9 NEW cov: 12242 ft: 12229 corp: 2/17b lim: 45 exec/s: 0 rss: 73Mb L: 16/16 MS: 2 InsertRepeatedBytes-InsertRepeatedBytes- 00:08:15.261 [2024-11-05 10:35:41.124315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:59595959 cdw11:59590002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.261 [2024-11-05 10:35:41.124369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.261 #10 NEW cov: 12372 ft: 12780 corp: 3/33b lim: 45 exec/s: 0 rss: 73Mb L: 16/16 MS: 1 CrossOver- 00:08:15.261 [2024-11-05 10:35:41.204847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.261 [2024-11-05 10:35:41.204882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.261 [2024-11-05 10:35:41.204948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff 
cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.261 [2024-11-05 10:35:41.204967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.261 [2024-11-05 10:35:41.205032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.261 [2024-11-05 10:35:41.205051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.261 #16 NEW cov: 12378 ft: 13777 corp: 4/66b lim: 45 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:08:15.261 [2024-11-05 10:35:41.264933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.261 [2024-11-05 10:35:41.264968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.261 [2024-11-05 10:35:41.265034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.261 [2024-11-05 10:35:41.265059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.261 [2024-11-05 10:35:41.265122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.261 [2024-11-05 10:35:41.265141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.261 #17 NEW cov: 12463 ft: 14054 corp: 5/99b lim: 45 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 ChangeByte- 00:08:15.538 [2024-11-05 10:35:41.345361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:a4a40aa4 cdw11:a4a40005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.538 [2024-11-05 10:35:41.345394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.538 [2024-11-05 10:35:41.345460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:a4a4a4a4 cdw11:a4a40005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.539 [2024-11-05 10:35:41.345480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.539 [2024-11-05 10:35:41.345544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:a4a4a4a4 cdw11:a4a40005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.539 [2024-11-05 10:35:41.345563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.539 [2024-11-05 10:35:41.345628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:a4a4a4a4 cdw11:a4a40005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.539 [2024-11-05 10:35:41.345647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:15.539 #18 NEW cov: 12463 ft: 14434 corp: 6/143b lim: 45 exec/s: 0 rss: 73Mb L: 44/44 MS: 1 InsertRepeatedBytes- 00:08:15.539 [2024-11-05 
10:35:41.394892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:59595959 cdw11:59590002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.539 [2024-11-05 10:35:41.394926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.539 #19 NEW cov: 12463 ft: 14531 corp: 7/159b lim: 45 exec/s: 0 rss: 73Mb L: 16/44 MS: 1 ChangeByte- 00:08:15.539 [2024-11-05 10:35:41.445473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.539 [2024-11-05 10:35:41.445507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.539 [2024-11-05 10:35:41.445574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.539 [2024-11-05 10:35:41.445595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.539 [2024-11-05 10:35:41.445661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ff3dffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.539 [2024-11-05 10:35:41.445680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.539 #20 NEW cov: 12463 ft: 14591 corp: 8/192b lim: 45 exec/s: 0 rss: 73Mb L: 33/44 MS: 1 ChangeByte- 00:08:15.539 [2024-11-05 10:35:41.525316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:59595959 cdw11:59590002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.539 [2024-11-05 10:35:41.525349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.539 #21 NEW cov: 12463 ft: 14609 corp: 9/208b lim: 45 exec/s: 0 rss: 73Mb L: 16/44 MS: 1 ChangeByte- 00:08:15.539 [2024-11-05 10:35:41.575826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00004a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.539 [2024-11-05 10:35:41.575859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.539 [2024-11-05 10:35:41.575924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.539 [2024-11-05 10:35:41.575944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.539 [2024-11-05 10:35:41.576009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.539 [2024-11-05 10:35:41.576028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.857 #23 NEW cov: 12463 ft: 14667 corp: 10/237b lim: 45 exec/s: 0 rss: 73Mb L: 29/44 MS: 2 ChangeBit-InsertRepeatedBytes- 00:08:15.857 [2024-11-05 10:35:41.625974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:59595959 
cdw11:59590002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.857 [2024-11-05 10:35:41.626008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.857 [2024-11-05 10:35:41.626075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0a59590a cdw11:ffff0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.857 [2024-11-05 10:35:41.626095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.857 [2024-11-05 10:35:41.626159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:a4a4a4a4 cdw11:a4a40005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.857 [2024-11-05 10:35:41.626177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.857 NEW_FUNC[1/1]: 0x1c30d58 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:15.857 #24 NEW cov: 12486 ft: 14737 corp: 11/264b lim: 45 exec/s: 0 rss: 73Mb L: 27/44 MS: 1 CrossOver- 00:08:15.857 [2024-11-05 10:35:41.706383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:a4a40aa4 cdw11:a4a40005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.857 [2024-11-05 10:35:41.706418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.857 [2024-11-05 10:35:41.706484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:a4a4a4a4 cdw11:a4a40005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.857 [2024-11-05 10:35:41.706504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.857 [2024-11-05 10:35:41.706567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:a4a4a4a4 cdw11:a4a40005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.857 [2024-11-05 10:35:41.706586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.857 [2024-11-05 10:35:41.706648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:85a4a4a4 cdw11:a4a40005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.857 [2024-11-05 10:35:41.706667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:15.857 #25 NEW cov: 12486 ft: 14769 corp: 12/308b lim: 45 exec/s: 25 rss: 73Mb L: 44/44 MS: 1 ChangeByte- 00:08:15.857 [2024-11-05 10:35:41.786399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:59595959 cdw11:59590002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.857 [2024-11-05 10:35:41.786436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.857 [2024-11-05 10:35:41.786504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0a59590a cdw11:ffff0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.857 [2024-11-05 10:35:41.786524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.857 [2024-11-05 
10:35:41.786588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:a4a4a4a4 cdw11:a4a40005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.857 [2024-11-05 10:35:41.786606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.857 #31 NEW cov: 12486 ft: 14796 corp: 13/335b lim: 45 exec/s: 31 rss: 74Mb L: 27/44 MS: 1 ShuffleBytes- 00:08:15.857 [2024-11-05 10:35:41.866321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:59595959 cdw11:59590002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.857 [2024-11-05 10:35:41.866355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.146 #32 NEW cov: 12486 ft: 14834 corp: 14/351b lim: 45 exec/s: 32 rss: 74Mb L: 16/44 MS: 1 ShuffleBytes- 00:08:16.146 [2024-11-05 10:35:41.946922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00004a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.146 [2024-11-05 10:35:41.946956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.146 [2024-11-05 10:35:41.947021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.146 [2024-11-05 10:35:41.947040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.146 [2024-11-05 10:35:41.947101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.146 [2024-11-05 10:35:41.947121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.146 #33 NEW cov: 12486 ft: 14892 corp: 15/380b lim: 45 exec/s: 33 rss: 74Mb L: 29/44 MS: 1 ChangeByte- 00:08:16.146 [2024-11-05 10:35:42.027073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.146 [2024-11-05 10:35:42.027106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.146 [2024-11-05 10:35:42.027172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.146 [2024-11-05 10:35:42.027192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.146 [2024-11-05 10:35:42.027255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ff3dffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.146 [2024-11-05 10:35:42.027274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.146 #34 NEW cov: 12486 ft: 14924 corp: 16/413b lim: 45 exec/s: 34 rss: 74Mb L: 33/44 MS: 1 ChangeByte- 00:08:16.147 [2024-11-05 10:35:42.107502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:59595959 
cdw11:59590002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.147 [2024-11-05 10:35:42.107537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.147 [2024-11-05 10:35:42.107607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:59005959 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.147 [2024-11-05 10:35:42.107627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.147 [2024-11-05 10:35:42.107692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.147 [2024-11-05 10:35:42.107711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.147 [2024-11-05 10:35:42.107781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.147 [2024-11-05 10:35:42.107800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.147 #35 NEW cov: 12486 ft: 14947 corp: 17/454b lim: 45 exec/s: 35 rss: 74Mb L: 41/44 MS: 1 InsertRepeatedBytes- 00:08:16.147 [2024-11-05 10:35:42.187724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.147 [2024-11-05 10:35:42.187758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.147 [2024-11-05 10:35:42.187824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.147 [2024-11-05 10:35:42.187844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.147 [2024-11-05 10:35:42.187909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ff3dffff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.147 [2024-11-05 10:35:42.187927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.147 [2024-11-05 10:35:42.187991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.147 [2024-11-05 10:35:42.188010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.412 #36 NEW cov: 12486 ft: 14985 corp: 18/490b lim: 45 exec/s: 36 rss: 74Mb L: 36/44 MS: 1 InsertRepeatedBytes- 00:08:16.412 [2024-11-05 10:35:42.237467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00004a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.412 [2024-11-05 10:35:42.237501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.412 [2024-11-05 10:35:42.237570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 
nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.412 [2024-11-05 10:35:42.237590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.412 #37 NEW cov: 12486 ft: 15274 corp: 19/511b lim: 45 exec/s: 37 rss: 74Mb L: 21/44 MS: 1 EraseBytes- 00:08:16.412 [2024-11-05 10:35:42.288017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.412 [2024-11-05 10:35:42.288051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.412 [2024-11-05 10:35:42.288117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffff76 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.412 [2024-11-05 10:35:42.288137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.412 [2024-11-05 10:35:42.288205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.412 [2024-11-05 10:35:42.288225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.412 [2024-11-05 10:35:42.288290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.412 [2024-11-05 10:35:42.288309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.412 #38 NEW cov: 12486 ft: 15282 corp: 20/548b lim: 45 exec/s: 38 rss: 74Mb L: 37/44 MS: 1 CopyPart- 00:08:16.412 [2024-11-05 10:35:42.338134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.412 [2024-11-05 10:35:42.338167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.412 [2024-11-05 10:35:42.338234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.412 [2024-11-05 10:35:42.338253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.412 [2024-11-05 10:35:42.338318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ff3dffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.412 [2024-11-05 10:35:42.338338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.412 [2024-11-05 10:35:42.338399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:99a371bd cdw11:43f60007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.412 [2024-11-05 10:35:42.338418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.412 #39 NEW cov: 12486 ft: 15322 corp: 21/589b lim: 45 exec/s: 39 rss: 74Mb L: 41/44 MS: 1 CMP- DE: 
"\000:q\275\231\243C\366"- 00:08:16.412 [2024-11-05 10:35:42.418446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.412 [2024-11-05 10:35:42.418480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.412 [2024-11-05 10:35:42.418547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.412 [2024-11-05 10:35:42.418568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.412 [2024-11-05 10:35:42.418631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ff3dffff cdw11:ff000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.412 [2024-11-05 10:35:42.418650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.412 [2024-11-05 10:35:42.418719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:43f699a3 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.412 [2024-11-05 10:35:42.418739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.412 #40 NEW cov: 12486 ft: 15385 corp: 22/625b lim: 45 exec/s: 40 rss: 74Mb L: 36/44 MS: 1 PersAutoDict- DE: "\000:q\275\231\243C\366"- 00:08:16.672 [2024-11-05 10:35:42.498036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:59593d59 cdw11:59590002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.672 [2024-11-05 10:35:42.498070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.672 #41 NEW cov: 12486 ft: 15394 corp: 23/641b lim: 45 exec/s: 41 rss: 74Mb L: 16/44 MS: 1 ChangeByte- 00:08:16.672 [2024-11-05 10:35:42.548738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.672 [2024-11-05 10:35:42.548772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.672 [2024-11-05 10:35:42.548837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.672 [2024-11-05 10:35:42.548857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.672 [2024-11-05 10:35:42.548920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ff3dffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.672 [2024-11-05 10:35:42.548940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.672 [2024-11-05 10:35:42.549002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:99a371bd cdw11:43f60007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.672 [2024-11-05 10:35:42.549021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.672 #42 NEW cov: 12486 ft: 15396 corp: 24/685b lim: 45 exec/s: 42 rss: 74Mb L: 44/44 MS: 1 InsertRepeatedBytes- 00:08:16.672 [2024-11-05 10:35:42.628774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.672 [2024-11-05 10:35:42.628808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.672 [2024-11-05 10:35:42.628874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.672 [2024-11-05 10:35:42.628894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.672 [2024-11-05 10:35:42.628957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ff3dffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.672 [2024-11-05 10:35:42.628976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.672 #43 NEW cov: 12486 ft: 15420 corp: 25/718b lim: 45 exec/s: 43 rss: 74Mb L: 33/44 MS: 1 ChangeBit- 00:08:16.672 [2024-11-05 10:35:42.678901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:59595959 cdw11:59590002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.672 [2024-11-05 10:35:42.678933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.672 [2024-11-05 10:35:42.678999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0a59590a cdw11:ffff0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.672 [2024-11-05 10:35:42.679019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.672 [2024-11-05 10:35:42.679081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:2f2fa42f cdw11:2f2f0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.672 [2024-11-05 10:35:42.679100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.672 #49 NEW cov: 12486 ft: 15437 corp: 26/753b lim: 45 exec/s: 49 rss: 74Mb L: 35/44 MS: 1 InsertRepeatedBytes- 00:08:16.934 [2024-11-05 10:35:42.759331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:59595959 cdw11:59590002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.934 [2024-11-05 10:35:42.759369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.934 [2024-11-05 10:35:42.759437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:59005959 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.934 [2024-11-05 10:35:42.759457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.934 [2024-11-05 10:35:42.759523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.934 [2024-11-05 10:35:42.759541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.934 [2024-11-05 10:35:42.759606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.934 [2024-11-05 10:35:42.759625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.934 #50 NEW cov: 12486 ft: 15453 corp: 27/797b lim: 45 exec/s: 25 rss: 74Mb L: 44/44 MS: 1 InsertRepeatedBytes- 00:08:16.934 #50 DONE cov: 12486 ft: 15453 corp: 27/797b lim: 45 exec/s: 25 rss: 74Mb 00:08:16.934 ###### Recommended dictionary. ###### 00:08:16.934 "\000:q\275\231\243C\366" # Uses: 1 00:08:16.934 ###### End of recommended dictionary. ###### 00:08:16.934 Done 50 runs in 2 second(s) 00:08:16.934 10:35:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:08:16.934 10:35:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:16.934 10:35:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:16.934 10:35:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:08:16.934 10:35:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:08:16.934 10:35:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:16.934 10:35:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:16.934 10:35:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:08:16.934 10:35:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:08:16.934 10:35:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:16.934 10:35:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:16.934 10:35:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:08:16.934 10:35:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:08:16.934 10:35:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:08:16.934 10:35:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:08:16.934 10:35:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:16.934 10:35:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:16.934 10:35:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:16.934 10:35:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:08:16.934 [2024-11-05 
10:35:42.991772] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:08:16.934 [2024-11-05 10:35:42.991852] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2862490 ] 00:08:17.201 [2024-11-05 10:35:43.267482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.463 [2024-11-05 10:35:43.315571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.463 [2024-11-05 10:35:43.379454] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.464 [2024-11-05 10:35:43.395689] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:08:17.464 INFO: Running with entropic power schedule (0xFF, 100). 00:08:17.464 INFO: Seed: 698957186 00:08:17.464 INFO: Loaded 1 modules (387441 inline 8-bit counters): 387441 [0x2c3ac4c, 0x2c995bd), 00:08:17.464 INFO: Loaded 1 PC tables (387441 PCs): 387441 [0x2c995c0,0x3282cd0), 00:08:17.464 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:08:17.464 INFO: A corpus is not provided, starting from an empty corpus 00:08:17.464 #2 INITED exec/s: 0 rss: 66Mb 00:08:17.464 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:17.464 This may also happen if the target rejected all inputs we tried so far 00:08:17.464 [2024-11-05 10:35:43.441355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a29 cdw11:00000000 00:08:17.464 [2024-11-05 10:35:43.441383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:17.723 NEW_FUNC[1/714]: 0x446708 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:08:17.723 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:17.723 #4 NEW cov: 12177 ft: 12170 corp: 2/3b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 2 CrossOver-InsertByte- 00:08:17.723 [2024-11-05 10:35:43.762135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00006029 cdw11:00000000 00:08:17.723 [2024-11-05 10:35:43.762172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:17.982 #5 NEW cov: 12290 ft: 12730 corp: 3/5b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 ChangeByte- 00:08:17.982 [2024-11-05 10:35:43.822198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e29 cdw11:00000000 00:08:17.982 [2024-11-05 10:35:43.822225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:17.982 #6 NEW cov: 12296 ft: 13024 corp: 4/7b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 ChangeBinInt- 00:08:17.982 [2024-11-05 10:35:43.862299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ba0a cdw11:00000000 00:08:17.982 [2024-11-05 10:35:43.862325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:08:17.982 #8 NEW cov: 12381 ft: 13342 corp: 5/9b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 2 ShuffleBytes-InsertByte- 00:08:17.982 [2024-11-05 10:35:43.902447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:17.982 [2024-11-05 10:35:43.902472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:17.982 #9 NEW cov: 12381 ft: 13447 corp: 6/11b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 CrossOver- 00:08:17.982 [2024-11-05 10:35:43.942777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a29 cdw11:00000000 00:08:17.982 [2024-11-05 10:35:43.942802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:17.982 [2024-11-05 10:35:43.942871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:17.982 [2024-11-05 10:35:43.942886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.982 [2024-11-05 10:35:43.942945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:17.982 [2024-11-05 10:35:43.942959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.982 #10 NEW cov: 12381 ft: 13796 corp: 7/17b lim: 10 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:08:17.982 [2024-11-05 10:35:43.982648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000ab8 cdw11:00000000 00:08:17.982 [2024-11-05 10:35:43.982674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:17.982 #11 NEW cov: 12381 ft: 13887 corp: 8/19b lim: 10 exec/s: 0 rss: 73Mb L: 2/6 MS: 1 InsertByte- 00:08:17.982 [2024-11-05 10:35:44.022766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000b8b8 cdw11:00000000 00:08:17.982 [2024-11-05 10:35:44.022792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.241 #12 NEW cov: 12381 ft: 13925 corp: 9/21b lim: 10 exec/s: 0 rss: 73Mb L: 2/6 MS: 1 CrossOver- 00:08:18.241 [2024-11-05 10:35:44.083234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000ee6 cdw11:00000000 00:08:18.241 [2024-11-05 10:35:44.083258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.241 [2024-11-05 10:35:44.083313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e6e6 cdw11:00000000 00:08:18.241 [2024-11-05 10:35:44.083327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.241 [2024-11-05 10:35:44.083380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000e629 cdw11:00000000 00:08:18.241 [2024-11-05 10:35:44.083393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:08:18.241 #13 NEW cov: 12381 ft: 13960 corp: 10/27b lim: 10 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:08:18.241 [2024-11-05 10:35:44.143525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e29 cdw11:00000000 00:08:18.241 [2024-11-05 10:35:44.143550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.241 [2024-11-05 10:35:44.143604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:18.241 [2024-11-05 10:35:44.143618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.241 [2024-11-05 10:35:44.143671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:18.241 [2024-11-05 10:35:44.143685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.241 [2024-11-05 10:35:44.143741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:08:18.241 [2024-11-05 10:35:44.143755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.241 #14 NEW cov: 12381 ft: 14211 corp: 11/36b lim: 10 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:08:18.241 [2024-11-05 10:35:44.183215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a29 cdw11:00000000 00:08:18.242 [2024-11-05 10:35:44.183240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.242 #15 NEW cov: 12381 ft: 14228 corp: 12/38b lim: 10 exec/s: 0 rss: 73Mb L: 2/9 MS: 1 ShuffleBytes- 00:08:18.242 [2024-11-05 10:35:44.223310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e29 cdw11:00000000 00:08:18.242 [2024-11-05 10:35:44.223338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.242 #16 NEW cov: 12381 ft: 14262 corp: 13/40b lim: 10 exec/s: 0 rss: 73Mb L: 2/9 MS: 1 ShuffleBytes- 00:08:18.242 [2024-11-05 10:35:44.263689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001c1c cdw11:00000000 00:08:18.242 [2024-11-05 10:35:44.263717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.242 [2024-11-05 10:35:44.263788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001c1c cdw11:00000000 00:08:18.242 [2024-11-05 10:35:44.263806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.242 [2024-11-05 10:35:44.263858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00001c0e cdw11:00000000 00:08:18.242 [2024-11-05 10:35:44.263871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.242 #17 NEW cov: 12381 ft: 14307 corp: 14/47b lim: 10 exec/s: 0 rss: 73Mb L: 7/9 MS: 1 
InsertRepeatedBytes- 00:08:18.242 [2024-11-05 10:35:44.304088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e29 cdw11:00000000 00:08:18.242 [2024-11-05 10:35:44.304113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.242 [2024-11-05 10:35:44.304182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:18.242 [2024-11-05 10:35:44.304196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.242 [2024-11-05 10:35:44.304251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:18.242 [2024-11-05 10:35:44.304265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.242 [2024-11-05 10:35:44.304320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:18.242 [2024-11-05 10:35:44.304334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.242 [2024-11-05 10:35:44.304387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff28 cdw11:00000000 00:08:18.242 [2024-11-05 10:35:44.304401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:18.501 NEW_FUNC[1/1]: 0x1c30d58 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:18.501 #18 NEW cov: 12404 ft: 14395 corp: 15/57b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 CMP- DE: "\377\377\377\377\377\377\377("- 00:08:18.501 [2024-11-05 10:35:44.344218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:18.501 [2024-11-05 10:35:44.344243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.501 [2024-11-05 10:35:44.344299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:18.501 [2024-11-05 10:35:44.344313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.501 [2024-11-05 10:35:44.344365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:18.501 [2024-11-05 10:35:44.344378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.501 [2024-11-05 10:35:44.344434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff28 cdw11:00000000 00:08:18.501 [2024-11-05 10:35:44.344448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.501 [2024-11-05 10:35:44.344498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000e29 cdw11:00000000 00:08:18.501 [2024-11-05 10:35:44.344511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:18.501 #19 NEW cov: 12404 ft: 14435 corp: 16/67b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377("- 00:08:18.501 [2024-11-05 10:35:44.404377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a29 cdw11:00000000 00:08:18.501 [2024-11-05 10:35:44.404402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.501 [2024-11-05 10:35:44.404456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:18.501 [2024-11-05 10:35:44.404471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.501 [2024-11-05 10:35:44.404525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:18.501 [2024-11-05 10:35:44.404538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.501 [2024-11-05 10:35:44.404591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:18.501 [2024-11-05 10:35:44.404604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.501 [2024-11-05 10:35:44.404657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:18.501 [2024-11-05 10:35:44.404670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:18.501 #20 NEW cov: 12404 ft: 14444 corp: 17/77b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:08:18.501 [2024-11-05 10:35:44.443909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000d29 cdw11:00000000 00:08:18.501 [2024-11-05 10:35:44.443934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.501 #21 NEW cov: 12404 ft: 14497 corp: 18/79b lim: 10 exec/s: 21 rss: 73Mb L: 2/10 MS: 1 ChangeBinInt- 00:08:18.501 [2024-11-05 10:35:44.484576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e29 cdw11:00000000 00:08:18.501 [2024-11-05 10:35:44.484601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.501 [2024-11-05 10:35:44.484671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:18.501 [2024-11-05 10:35:44.484685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.501 [2024-11-05 10:35:44.484736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:18.502 [2024-11-05 10:35:44.484750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.502 [2024-11-05 10:35:44.484813] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000bfff cdw11:00000000 00:08:18.502 [2024-11-05 10:35:44.484827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.502 [2024-11-05 10:35:44.484884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff28 cdw11:00000000 00:08:18.502 [2024-11-05 10:35:44.484898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:18.502 #22 NEW cov: 12404 ft: 14507 corp: 19/89b lim: 10 exec/s: 22 rss: 74Mb L: 10/10 MS: 1 ChangeBit- 00:08:18.502 [2024-11-05 10:35:44.544226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a7b cdw11:00000000 00:08:18.502 [2024-11-05 10:35:44.544251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.761 #23 NEW cov: 12404 ft: 14523 corp: 20/92b lim: 10 exec/s: 23 rss: 74Mb L: 3/10 MS: 1 InsertByte- 00:08:18.761 [2024-11-05 10:35:44.604382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ba0a cdw11:00000000 00:08:18.761 [2024-11-05 10:35:44.604407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.761 #24 NEW cov: 12404 ft: 14549 corp: 21/95b lim: 10 exec/s: 24 rss: 74Mb L: 3/10 MS: 1 InsertByte- 00:08:18.761 [2024-11-05 10:35:44.664882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000b8b8 cdw11:00000000 00:08:18.761 [2024-11-05 10:35:44.664908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.761 [2024-11-05 10:35:44.664961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:18.761 [2024-11-05 10:35:44.664975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.761 [2024-11-05 10:35:44.665030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:18.761 [2024-11-05 10:35:44.665044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.761 #25 NEW cov: 12404 ft: 14562 corp: 22/102b lim: 10 exec/s: 25 rss: 74Mb L: 7/10 MS: 1 InsertRepeatedBytes- 00:08:18.761 [2024-11-05 10:35:44.724741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a29 cdw11:00000000 00:08:18.761 [2024-11-05 10:35:44.724766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.761 #26 NEW cov: 12404 ft: 14578 corp: 23/104b lim: 10 exec/s: 26 rss: 74Mb L: 2/10 MS: 1 ShuffleBytes- 00:08:18.761 [2024-11-05 10:35:44.765101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000ee6 cdw11:00000000 00:08:18.761 [2024-11-05 10:35:44.765126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.761 [2024-11-05 10:35:44.765196] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e6e6 cdw11:00000000 00:08:18.761 [2024-11-05 10:35:44.765211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.761 [2024-11-05 10:35:44.765264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000e669 cdw11:00000000 00:08:18.761 [2024-11-05 10:35:44.765277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.761 #27 NEW cov: 12404 ft: 14620 corp: 24/110b lim: 10 exec/s: 27 rss: 74Mb L: 6/10 MS: 1 ChangeBit- 00:08:18.761 [2024-11-05 10:35:44.825306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000b8bf cdw11:00000000 00:08:18.761 [2024-11-05 10:35:44.825330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.761 [2024-11-05 10:35:44.825386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:18.761 [2024-11-05 10:35:44.825403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.761 [2024-11-05 10:35:44.825456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:18.761 [2024-11-05 10:35:44.825469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.020 #28 NEW cov: 12404 ft: 14622 corp: 25/117b lim: 10 exec/s: 28 rss: 74Mb L: 7/10 MS: 1 ChangeBinInt- 00:08:19.021 [2024-11-05 10:35:44.885178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a3d cdw11:00000000 00:08:19.021 [2024-11-05 10:35:44.885204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.021 #29 NEW cov: 12404 ft: 14635 corp: 26/119b lim: 10 exec/s: 29 rss: 74Mb L: 2/10 MS: 1 InsertByte- 00:08:19.021 [2024-11-05 10:35:44.925434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000ee6 cdw11:00000000 00:08:19.021 [2024-11-05 10:35:44.925460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.021 [2024-11-05 10:35:44.925513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e669 cdw11:00000000 00:08:19.021 [2024-11-05 10:35:44.925528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.021 #30 NEW cov: 12404 ft: 14808 corp: 27/123b lim: 10 exec/s: 30 rss: 74Mb L: 4/10 MS: 1 EraseBytes- 00:08:19.021 [2024-11-05 10:35:44.985623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e0e cdw11:00000000 00:08:19.021 [2024-11-05 10:35:44.985649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.021 [2024-11-05 10:35:44.985723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000ea9 
cdw11:00000000 00:08:19.021 [2024-11-05 10:35:44.985738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.021 #35 NEW cov: 12404 ft: 14814 corp: 28/127b lim: 10 exec/s: 35 rss: 74Mb L: 4/10 MS: 5 ChangeBinInt-ChangeByte-ChangeByte-ChangeBinInt-InsertRepeatedBytes- 00:08:19.021 [2024-11-05 10:35:45.026016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000eff cdw11:00000000 00:08:19.021 [2024-11-05 10:35:45.026042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.021 [2024-11-05 10:35:45.026112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:19.021 [2024-11-05 10:35:45.026127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.021 [2024-11-05 10:35:45.026182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000e6e6 cdw11:00000000 00:08:19.021 [2024-11-05 10:35:45.026195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.021 [2024-11-05 10:35:45.026248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000e6e6 cdw11:00000000 00:08:19.021 [2024-11-05 10:35:45.026262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:19.021 #36 NEW cov: 12404 ft: 14846 corp: 29/136b lim: 10 exec/s: 36 rss: 74Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:08:19.021 [2024-11-05 10:35:45.065708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000602d cdw11:00000000 00:08:19.021 [2024-11-05 10:35:45.065737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.280 #37 NEW cov: 12404 ft: 14901 corp: 30/138b lim: 10 exec/s: 37 rss: 74Mb L: 2/10 MS: 1 ChangeBit- 00:08:19.280 [2024-11-05 10:35:45.126172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ece3 cdw11:00000000 00:08:19.280 [2024-11-05 10:35:45.126197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.280 [2024-11-05 10:35:45.126251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e3e3 cdw11:00000000 00:08:19.280 [2024-11-05 10:35:45.126265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.280 [2024-11-05 10:35:45.126318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00001c0e cdw11:00000000 00:08:19.280 [2024-11-05 10:35:45.126332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.280 #38 NEW cov: 12404 ft: 14904 corp: 31/145b lim: 10 exec/s: 38 rss: 74Mb L: 7/10 MS: 1 ChangeBinInt- 00:08:19.280 [2024-11-05 10:35:45.186094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ba0a cdw11:00000000 00:08:19.280 [2024-11-05 
10:35:45.186120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.280 #39 NEW cov: 12404 ft: 14925 corp: 32/148b lim: 10 exec/s: 39 rss: 74Mb L: 3/10 MS: 1 CrossOver- 00:08:19.280 [2024-11-05 10:35:45.246399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e7f cdw11:00000000 00:08:19.280 [2024-11-05 10:35:45.246425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.280 [2024-11-05 10:35:45.246492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007f7f cdw11:00000000 00:08:19.280 [2024-11-05 10:35:45.246507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.280 #40 NEW cov: 12404 ft: 14963 corp: 33/153b lim: 10 exec/s: 40 rss: 74Mb L: 5/10 MS: 1 InsertRepeatedBytes- 00:08:19.280 [2024-11-05 10:35:45.286901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000eff cdw11:00000000 00:08:19.280 [2024-11-05 10:35:45.286927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.280 [2024-11-05 10:35:45.286995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:19.280 [2024-11-05 10:35:45.287010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.280 [2024-11-05 10:35:45.287062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000029ff cdw11:00000000 00:08:19.280 [2024-11-05 10:35:45.287076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.280 [2024-11-05 10:35:45.287128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000bfff cdw11:00000000 00:08:19.280 [2024-11-05 10:35:45.287142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:19.280 [2024-11-05 10:35:45.287196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff28 cdw11:00000000 00:08:19.280 [2024-11-05 10:35:45.287210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:19.280 #41 NEW cov: 12404 ft: 15036 corp: 34/163b lim: 10 exec/s: 41 rss: 74Mb L: 10/10 MS: 1 ShuffleBytes- 00:08:19.280 [2024-11-05 10:35:45.346944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a7b cdw11:00000000 00:08:19.280 [2024-11-05 10:35:45.346973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.280 [2024-11-05 10:35:45.347028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a61 cdw11:00000000 00:08:19.280 [2024-11-05 10:35:45.347042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.280 [2024-11-05 10:35:45.347094] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00006161 cdw11:00000000 00:08:19.280 [2024-11-05 10:35:45.347107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.280 [2024-11-05 10:35:45.347158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00006161 cdw11:00000000 00:08:19.280 [2024-11-05 10:35:45.347172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:19.539 #42 NEW cov: 12404 ft: 15064 corp: 35/171b lim: 10 exec/s: 42 rss: 75Mb L: 8/10 MS: 1 InsertRepeatedBytes- 00:08:19.539 [2024-11-05 10:35:45.406923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00006029 cdw11:00000000 00:08:19.539 [2024-11-05 10:35:45.406947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.539 #43 NEW cov: 12404 ft: 15109 corp: 36/173b lim: 10 exec/s: 21 rss: 75Mb L: 2/10 MS: 1 ShuffleBytes- 00:08:19.539 #43 DONE cov: 12404 ft: 15109 corp: 36/173b lim: 10 exec/s: 21 rss: 75Mb 00:08:19.539 ###### Recommended dictionary. ###### 00:08:19.539 "\377\377\377\377\377\377\377(" # Uses: 1 00:08:19.539 "\377\377\377\377\377\377\377\377" # Uses: 0 00:08:19.539 ###### End of recommended dictionary. ###### 00:08:19.539 Done 43 runs in 2 second(s) 00:08:19.539 10:35:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:08:19.539 10:35:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:19.539 10:35:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:19.540 10:35:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:08:19.540 10:35:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:08:19.540 10:35:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:19.540 10:35:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:19.540 10:35:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:08:19.540 10:35:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:08:19.540 10:35:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:19.540 10:35:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:19.540 10:35:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:08:19.540 10:35:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:08:19.540 10:35:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:08:19.540 10:35:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:08:19.540 10:35:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:19.540 10:35:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:19.540 10:35:45 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:19.540 10:35:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:08:19.540 [2024-11-05 10:35:45.591589] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:08:19.540 [2024-11-05 10:35:45.591660] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2862849 ] 00:08:19.798 [2024-11-05 10:35:45.861182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.057 [2024-11-05 10:35:45.909697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.057 [2024-11-05 10:35:45.973563] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.057 [2024-11-05 10:35:45.989806] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:08:20.057 INFO: Running with entropic power schedule (0xFF, 100). 00:08:20.057 INFO: Seed: 3292961480 00:08:20.057 INFO: Loaded 1 modules (387441 inline 8-bit counters): 387441 [0x2c3ac4c, 0x2c995bd), 00:08:20.057 INFO: Loaded 1 PC tables (387441 PCs): 387441 [0x2c995c0,0x3282cd0), 00:08:20.057 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:08:20.057 INFO: A corpus is not provided, starting from an empty corpus 00:08:20.057 #2 INITED exec/s: 0 rss: 66Mb 00:08:20.057 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
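[Editor's sketch] The xtrace above records how nvmf/run.sh stages each fuzzer instance: it derives a per-run TCP port from the fuzzer index (printf %02d 7 giving port=4407), creates a per-run corpus directory, rewrites the shared fuzz_json.conf so the NVMe-oF target listens on that port, registers two known leaks for LeakSanitizer, and then launches llvm_nvme_fuzz with the matching transport ID. The sketch below condenses those traced steps into one shell function. The wrapper name and the $rootdir variable are illustrative, and the output redirections (into the per-run config and the suppression file) are inferred, since bash xtrace does not show redirections; the paths, flags, and sed expression are taken from the trace itself.

    # Condensed sketch of the traced per-run setup (illustrative wrapper; paths/flags from the log above).
    start_llvm_fuzz_sketch() {
        local fuzzer_type=$1                                   # 7 for this run, 8 for the next
        local nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
        local suppress_file=/var/tmp/suppress_nvmf_fuzz
        local corpus_dir=$rootdir/../corpus/llvm_nvmf_${fuzzer_type}   # $rootdir: the SPDK checkout (assumed)
        local port=44$(printf %02d $fuzzer_type)               # 7 -> 4407, 8 -> 4408
        mkdir -p $corpus_dir
        # Point the generated target config at the per-run listener port (redirection inferred).
        sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
            $rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf > $nvmf_cfg
        # Known, accepted leaks so LeakSanitizer does not fail the run (redirections inferred).
        echo leak:spdk_nvmf_qpair_disconnect > $suppress_file
        echo leak:nvmf_ctrlr_create >> $suppress_file
        LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0 \
            $rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 \
                -P $rootdir/../output/llvm/ \
                -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port" \
                -c $nvmf_cfg -t 1 -D $corpus_dir -Z $fuzzer_type
    }

In the trace that follows, the target comes up on 127.0.0.1:4407 and libFuzzer starts from scratch, which is why the startup INFO lines report 0 files found in the corpus directory and an empty corpus.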
00:08:20.057 This may also happen if the target rejected all inputs we tried so far 00:08:20.057 [2024-11-05 10:35:46.035427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000227 cdw11:00000000 00:08:20.057 [2024-11-05 10:35:46.035456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.316 NEW_FUNC[1/714]: 0x447108 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:08:20.316 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:20.316 #6 NEW cov: 12177 ft: 12161 corp: 2/3b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 4 ChangeBit-ShuffleBytes-CopyPart-InsertByte- 00:08:20.316 [2024-11-05 10:35:46.356296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000227 cdw11:00000000 00:08:20.316 [2024-11-05 10:35:46.356332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.575 #7 NEW cov: 12290 ft: 12641 corp: 3/5b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 CopyPart- 00:08:20.575 [2024-11-05 10:35:46.416354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f6d8 cdw11:00000000 00:08:20.575 [2024-11-05 10:35:46.416381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.575 #8 NEW cov: 12296 ft: 12882 corp: 4/7b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 ChangeBinInt- 00:08:20.575 [2024-11-05 10:35:46.456590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000202 cdw11:00000000 00:08:20.575 [2024-11-05 10:35:46.456615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.575 [2024-11-05 10:35:46.456685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002727 cdw11:00000000 00:08:20.575 [2024-11-05 10:35:46.456700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:20.575 #9 NEW cov: 12390 ft: 13391 corp: 5/11b lim: 10 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 CrossOver- 00:08:20.575 [2024-11-05 10:35:46.516618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000970a cdw11:00000000 00:08:20.575 [2024-11-05 10:35:46.516644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.575 #11 NEW cov: 12390 ft: 13532 corp: 6/13b lim: 10 exec/s: 0 rss: 73Mb L: 2/4 MS: 2 ShuffleBytes-InsertByte- 00:08:20.575 [2024-11-05 10:35:46.557238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000003a cdw11:00000000 00:08:20.575 [2024-11-05 10:35:46.557263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.575 [2024-11-05 10:35:46.557331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000071bf cdw11:00000000 00:08:20.575 [2024-11-05 10:35:46.557346] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:20.575 [2024-11-05 10:35:46.557398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000d9ee cdw11:00000000 00:08:20.575 [2024-11-05 10:35:46.557412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:20.575 [2024-11-05 10:35:46.557465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000531c cdw11:00000000 00:08:20.575 [2024-11-05 10:35:46.557480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:20.575 [2024-11-05 10:35:46.557531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000970a cdw11:00000000 00:08:20.575 [2024-11-05 10:35:46.557545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:20.575 #12 NEW cov: 12390 ft: 13967 corp: 7/23b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 CMP- DE: "\000:q\277\331\356S\034"- 00:08:20.575 [2024-11-05 10:35:46.616942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000020a cdw11:00000000 00:08:20.575 [2024-11-05 10:35:46.616968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.575 #13 NEW cov: 12390 ft: 14007 corp: 8/25b lim: 10 exec/s: 0 rss: 73Mb L: 2/10 MS: 1 CrossOver- 00:08:20.834 [2024-11-05 10:35:46.657399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000002d4 cdw11:00000000 00:08:20.834 [2024-11-05 10:35:46.657425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.834 [2024-11-05 10:35:46.657478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000d4d4 cdw11:00000000 00:08:20.834 [2024-11-05 10:35:46.657493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:20.834 [2024-11-05 10:35:46.657544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000d4d4 cdw11:00000000 00:08:20.834 [2024-11-05 10:35:46.657558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:20.834 [2024-11-05 10:35:46.657608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000d4d4 cdw11:00000000 00:08:20.834 [2024-11-05 10:35:46.657622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:20.834 #14 NEW cov: 12390 ft: 14062 corp: 9/34b lim: 10 exec/s: 0 rss: 73Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:08:20.834 [2024-11-05 10:35:46.697658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000227 cdw11:00000000 00:08:20.834 [2024-11-05 10:35:46.697683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.834 [2024-11-05 10:35:46.697771] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000011af cdw11:00000000 00:08:20.834 [2024-11-05 10:35:46.697787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:20.834 [2024-11-05 10:35:46.697841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000170d cdw11:00000000 00:08:20.834 [2024-11-05 10:35:46.697855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:20.834 [2024-11-05 10:35:46.697906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000c071 cdw11:00000000 00:08:20.834 [2024-11-05 10:35:46.697921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:20.834 [2024-11-05 10:35:46.697972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00003a00 cdw11:00000000 00:08:20.834 [2024-11-05 10:35:46.697986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:20.834 #15 NEW cov: 12390 ft: 14151 corp: 10/44b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 CMP- DE: "\021\257\027\015\300q:\000"- 00:08:20.834 [2024-11-05 10:35:46.737815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000003a cdw11:00000000 00:08:20.834 [2024-11-05 10:35:46.737841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.834 [2024-11-05 10:35:46.737895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000071bf cdw11:00000000 00:08:20.834 [2024-11-05 10:35:46.737909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:20.834 [2024-11-05 10:35:46.737961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 00:08:20.834 [2024-11-05 10:35:46.737975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:20.834 [2024-11-05 10:35:46.738025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000531c cdw11:00000000 00:08:20.834 [2024-11-05 10:35:46.738039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:20.834 [2024-11-05 10:35:46.738090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000970a cdw11:00000000 00:08:20.834 [2024-11-05 10:35:46.738104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:20.834 #16 NEW cov: 12390 ft: 14196 corp: 11/54b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 ChangeBinInt- 00:08:20.834 [2024-11-05 10:35:46.797596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f6d8 cdw11:00000000 00:08:20.834 [2024-11-05 10:35:46.797622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.834 [2024-11-05 
10:35:46.797674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:20.834 [2024-11-05 10:35:46.797688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:20.834 #17 NEW cov: 12390 ft: 14240 corp: 12/59b lim: 10 exec/s: 0 rss: 73Mb L: 5/10 MS: 1 InsertRepeatedBytes- 00:08:20.834 [2024-11-05 10:35:46.857808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000202 cdw11:00000000 00:08:20.834 [2024-11-05 10:35:46.857834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.834 [2024-11-05 10:35:46.857886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002727 cdw11:00000000 00:08:20.834 [2024-11-05 10:35:46.857900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:20.834 #18 NEW cov: 12390 ft: 14247 corp: 13/64b lim: 10 exec/s: 0 rss: 73Mb L: 5/10 MS: 1 InsertByte- 00:08:21.093 [2024-11-05 10:35:46.917964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000202 cdw11:00000000 00:08:21.093 [2024-11-05 10:35:46.917990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.093 [2024-11-05 10:35:46.918043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002727 cdw11:00000000 00:08:21.093 [2024-11-05 10:35:46.918058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.093 NEW_FUNC[1/1]: 0x1c30d58 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:21.093 #19 NEW cov: 12413 ft: 14276 corp: 14/68b lim: 10 exec/s: 0 rss: 74Mb L: 4/10 MS: 1 ShuffleBytes- 00:08:21.093 [2024-11-05 10:35:46.958031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001e02 cdw11:00000000 00:08:21.093 [2024-11-05 10:35:46.958056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.093 [2024-11-05 10:35:46.958109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000227 cdw11:00000000 00:08:21.093 [2024-11-05 10:35:46.958123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.093 #20 NEW cov: 12413 ft: 14285 corp: 15/73b lim: 10 exec/s: 0 rss: 74Mb L: 5/10 MS: 1 InsertByte- 00:08:21.093 [2024-11-05 10:35:46.998014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000227 cdw11:00000000 00:08:21.093 [2024-11-05 10:35:46.998039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.093 #21 NEW cov: 12413 ft: 14293 corp: 16/75b lim: 10 exec/s: 0 rss: 74Mb L: 2/10 MS: 1 ShuffleBytes- 00:08:21.094 [2024-11-05 10:35:47.038121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000023f cdw11:00000000 00:08:21.094 [2024-11-05 10:35:47.038146] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.094 #22 NEW cov: 12413 ft: 14316 corp: 17/77b lim: 10 exec/s: 22 rss: 74Mb L: 2/10 MS: 1 ChangeByte- 00:08:21.094 [2024-11-05 10:35:47.078646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000002d4 cdw11:00000000 00:08:21.094 [2024-11-05 10:35:47.078671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.094 [2024-11-05 10:35:47.078739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000d4d4 cdw11:00000000 00:08:21.094 [2024-11-05 10:35:47.078756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.094 [2024-11-05 10:35:47.078809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00007cd4 cdw11:00000000 00:08:21.094 [2024-11-05 10:35:47.078822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.094 [2024-11-05 10:35:47.078874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000d4d4 cdw11:00000000 00:08:21.094 [2024-11-05 10:35:47.078888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.094 #23 NEW cov: 12413 ft: 14347 corp: 18/86b lim: 10 exec/s: 23 rss: 74Mb L: 9/10 MS: 1 ChangeByte- 00:08:21.094 [2024-11-05 10:35:47.138566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001e02 cdw11:00000000 00:08:21.094 [2024-11-05 10:35:47.138592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.094 [2024-11-05 10:35:47.138663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000202 cdw11:00000000 00:08:21.094 [2024-11-05 10:35:47.138678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.353 #24 NEW cov: 12413 ft: 14404 corp: 19/91b lim: 10 exec/s: 24 rss: 74Mb L: 5/10 MS: 1 CrossOver- 00:08:21.353 [2024-11-05 10:35:47.199147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000002d4 cdw11:00000000 00:08:21.353 [2024-11-05 10:35:47.199172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.353 [2024-11-05 10:35:47.199241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000d4d4 cdw11:00000000 00:08:21.353 [2024-11-05 10:35:47.199256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.353 [2024-11-05 10:35:47.199305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000d4d4 cdw11:00000000 00:08:21.353 [2024-11-05 10:35:47.199319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.353 [2024-11-05 10:35:47.199370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) 
qid:0 cid:7 nsid:0 cdw10:0000d4d4 cdw11:00000000 00:08:21.353 [2024-11-05 10:35:47.199383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.353 [2024-11-05 10:35:47.199435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00002727 cdw11:00000000 00:08:21.353 [2024-11-05 10:35:47.199450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:21.353 #25 NEW cov: 12413 ft: 14425 corp: 20/101b lim: 10 exec/s: 25 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:08:21.353 [2024-11-05 10:35:47.238859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f6d8 cdw11:00000000 00:08:21.353 [2024-11-05 10:35:47.238885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.353 [2024-11-05 10:35:47.238956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000d8 cdw11:00000000 00:08:21.353 [2024-11-05 10:35:47.238971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.353 #26 NEW cov: 12413 ft: 14466 corp: 21/106b lim: 10 exec/s: 26 rss: 74Mb L: 5/10 MS: 1 CopyPart- 00:08:21.353 [2024-11-05 10:35:47.299300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000002d4 cdw11:00000000 00:08:21.353 [2024-11-05 10:35:47.299327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.353 [2024-11-05 10:35:47.299381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000d4d4 cdw11:00000000 00:08:21.353 [2024-11-05 10:35:47.299396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.353 [2024-11-05 10:35:47.299447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000d4d4 cdw11:00000000 00:08:21.353 [2024-11-05 10:35:47.299460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.353 [2024-11-05 10:35:47.299512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000d4d4 cdw11:00000000 00:08:21.353 [2024-11-05 10:35:47.299526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.353 #27 NEW cov: 12413 ft: 14478 corp: 22/115b lim: 10 exec/s: 27 rss: 74Mb L: 9/10 MS: 1 ChangeBinInt- 00:08:21.353 [2024-11-05 10:35:47.339161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000093d8 cdw11:00000000 00:08:21.353 [2024-11-05 10:35:47.339186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.353 [2024-11-05 10:35:47.339238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.353 [2024-11-05 10:35:47.339252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.353 
#28 NEW cov: 12413 ft: 14559 corp: 23/120b lim: 10 exec/s: 28 rss: 74Mb L: 5/10 MS: 1 ChangeByte- 00:08:21.354 [2024-11-05 10:35:47.379112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000427 cdw11:00000000 00:08:21.354 [2024-11-05 10:35:47.379136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.354 #29 NEW cov: 12413 ft: 14574 corp: 24/122b lim: 10 exec/s: 29 rss: 74Mb L: 2/10 MS: 1 ChangeBinInt- 00:08:21.354 [2024-11-05 10:35:47.419253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000274 cdw11:00000000 00:08:21.354 [2024-11-05 10:35:47.419278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.613 #30 NEW cov: 12413 ft: 14585 corp: 25/125b lim: 10 exec/s: 30 rss: 74Mb L: 3/10 MS: 1 InsertByte- 00:08:21.613 [2024-11-05 10:35:47.459329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000002c5 cdw11:00000000 00:08:21.613 [2024-11-05 10:35:47.459353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.613 #31 NEW cov: 12413 ft: 14643 corp: 26/127b lim: 10 exec/s: 31 rss: 74Mb L: 2/10 MS: 1 ChangeByte- 00:08:21.613 [2024-11-05 10:35:47.519527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff27 cdw11:00000000 00:08:21.613 [2024-11-05 10:35:47.519552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.613 #32 NEW cov: 12413 ft: 14650 corp: 27/129b lim: 10 exec/s: 32 rss: 74Mb L: 2/10 MS: 1 ChangeByte- 00:08:21.613 [2024-11-05 10:35:47.559633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000410a cdw11:00000000 00:08:21.613 [2024-11-05 10:35:47.559659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.613 #33 NEW cov: 12413 ft: 14652 corp: 28/131b lim: 10 exec/s: 33 rss: 74Mb L: 2/10 MS: 1 ChangeByte- 00:08:21.613 [2024-11-05 10:35:47.600035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d4d4 cdw11:00000000 00:08:21.613 [2024-11-05 10:35:47.600061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.613 [2024-11-05 10:35:47.600115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000d4d4 cdw11:00000000 00:08:21.613 [2024-11-05 10:35:47.600130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.613 [2024-11-05 10:35:47.600182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000d427 cdw11:00000000 00:08:21.613 [2024-11-05 10:35:47.600197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.613 #34 NEW cov: 12413 ft: 14814 corp: 29/137b lim: 10 exec/s: 34 rss: 74Mb L: 6/10 MS: 1 EraseBytes- 00:08:21.613 [2024-11-05 10:35:47.640397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) 
qid:0 cid:4 nsid:0 cdw10:0000003a cdw11:00000000 00:08:21.613 [2024-11-05 10:35:47.640422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.613 [2024-11-05 10:35:47.640494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000011af cdw11:00000000 00:08:21.613 [2024-11-05 10:35:47.640509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.613 [2024-11-05 10:35:47.640560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000170d cdw11:00000000 00:08:21.613 [2024-11-05 10:35:47.640574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.613 [2024-11-05 10:35:47.640624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000c071 cdw11:00000000 00:08:21.613 [2024-11-05 10:35:47.640638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.613 [2024-11-05 10:35:47.640689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00003a00 cdw11:00000000 00:08:21.613 [2024-11-05 10:35:47.640702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:21.613 #35 NEW cov: 12413 ft: 14822 corp: 30/147b lim: 10 exec/s: 35 rss: 74Mb L: 10/10 MS: 1 PersAutoDict- DE: "\021\257\027\015\300q:\000"- 00:08:21.872 [2024-11-05 10:35:47.700463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000002d4 cdw11:00000000 00:08:21.872 [2024-11-05 10:35:47.700489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.872 [2024-11-05 10:35:47.700541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000d4d4 cdw11:00000000 00:08:21.872 [2024-11-05 10:35:47.700555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.872 [2024-11-05 10:35:47.700605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000d4d4 cdw11:00000000 00:08:21.872 [2024-11-05 10:35:47.700619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.872 [2024-11-05 10:35:47.700671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000d409 cdw11:00000000 00:08:21.872 [2024-11-05 10:35:47.700685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.872 #36 NEW cov: 12413 ft: 14840 corp: 31/155b lim: 10 exec/s: 36 rss: 74Mb L: 8/10 MS: 1 EraseBytes- 00:08:21.872 [2024-11-05 10:35:47.760810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000227 cdw11:00000000 00:08:21.872 [2024-11-05 10:35:47.760836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.872 [2024-11-05 10:35:47.760890] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000011af cdw11:00000000 00:08:21.872 [2024-11-05 10:35:47.760904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.872 [2024-11-05 10:35:47.760971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000170d cdw11:00000000 00:08:21.872 [2024-11-05 10:35:47.760985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.872 [2024-11-05 10:35:47.761037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000c071 cdw11:00000000 00:08:21.872 [2024-11-05 10:35:47.761051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.872 [2024-11-05 10:35:47.761102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00003a00 cdw11:00000000 00:08:21.872 [2024-11-05 10:35:47.761118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:21.872 #37 NEW cov: 12413 ft: 14845 corp: 32/165b lim: 10 exec/s: 37 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:08:21.872 [2024-11-05 10:35:47.820953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000002d4 cdw11:00000000 00:08:21.872 [2024-11-05 10:35:47.820979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.872 [2024-11-05 10:35:47.821032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000d402 cdw11:00000000 00:08:21.872 [2024-11-05 10:35:47.821046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.872 [2024-11-05 10:35:47.821098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000027d4 cdw11:00000000 00:08:21.872 [2024-11-05 10:35:47.821112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.872 [2024-11-05 10:35:47.821163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000d4d4 cdw11:00000000 00:08:21.872 [2024-11-05 10:35:47.821176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.872 [2024-11-05 10:35:47.821227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000d4d4 cdw11:00000000 00:08:21.872 [2024-11-05 10:35:47.821240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:21.872 #38 NEW cov: 12413 ft: 14853 corp: 33/175b lim: 10 exec/s: 38 rss: 74Mb L: 10/10 MS: 1 CrossOver- 00:08:21.873 [2024-11-05 10:35:47.860724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00009797 cdw11:00000000 00:08:21.873 [2024-11-05 10:35:47.860750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.873 [2024-11-05 10:35:47.860802] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00009702 cdw11:00000000 00:08:21.873 [2024-11-05 10:35:47.860817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.873 [2024-11-05 10:35:47.860868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000227 cdw11:00000000 00:08:21.873 [2024-11-05 10:35:47.860882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.873 #39 NEW cov: 12413 ft: 14859 corp: 34/182b lim: 10 exec/s: 39 rss: 74Mb L: 7/10 MS: 1 InsertRepeatedBytes- 00:08:21.873 [2024-11-05 10:35:47.920818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ce02 cdw11:00000000 00:08:21.873 [2024-11-05 10:35:47.920850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.873 [2024-11-05 10:35:47.920904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00007427 cdw11:00000000 00:08:21.873 [2024-11-05 10:35:47.920918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:22.132 #40 NEW cov: 12413 ft: 14870 corp: 35/186b lim: 10 exec/s: 40 rss: 74Mb L: 4/10 MS: 1 InsertByte- 00:08:22.132 [2024-11-05 10:35:47.981398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000003a cdw11:00000000 00:08:22.132 [2024-11-05 10:35:47.981423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:22.132 [2024-11-05 10:35:47.981493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000071d4 cdw11:00000000 00:08:22.132 [2024-11-05 10:35:47.981514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:22.132 [2024-11-05 10:35:47.981565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00002727 cdw11:00000000 00:08:22.132 [2024-11-05 10:35:47.981579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:22.132 [2024-11-05 10:35:47.981631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000531c cdw11:00000000 00:08:22.132 [2024-11-05 10:35:47.981645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:22.132 [2024-11-05 10:35:47.981698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000970a cdw11:00000000 00:08:22.132 [2024-11-05 10:35:47.981716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:22.132 #41 NEW cov: 12413 ft: 14873 corp: 36/196b lim: 10 exec/s: 41 rss: 74Mb L: 10/10 MS: 1 CrossOver- 00:08:22.132 [2024-11-05 10:35:48.021508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000227 cdw11:00000000 00:08:22.132 [2024-11-05 10:35:48.021536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:22.132 [2024-11-05 10:35:48.021589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000011af cdw11:00000000 00:08:22.132 [2024-11-05 10:35:48.021604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:22.132 [2024-11-05 10:35:48.021658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000170d cdw11:00000000 00:08:22.132 [2024-11-05 10:35:48.021672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:22.132 [2024-11-05 10:35:48.021724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000c097 cdw11:00000000 00:08:22.132 [2024-11-05 10:35:48.021738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:22.132 [2024-11-05 10:35:48.021793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00009700 cdw11:00000000 00:08:22.132 [2024-11-05 10:35:48.021807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:22.132 #42 NEW cov: 12413 ft: 14886 corp: 37/206b lim: 10 exec/s: 21 rss: 74Mb L: 10/10 MS: 1 CrossOver- 00:08:22.132 #42 DONE cov: 12413 ft: 14886 corp: 37/206b lim: 10 exec/s: 21 rss: 74Mb 00:08:22.132 ###### Recommended dictionary. ###### 00:08:22.132 "\000:q\277\331\356S\034" # Uses: 0 00:08:22.132 "\021\257\027\015\300q:\000" # Uses: 1 00:08:22.132 ###### End of recommended dictionary. 
###### 00:08:22.132 Done 42 runs in 2 second(s) 00:08:22.132 10:35:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:08:22.132 10:35:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:22.132 10:35:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:22.132 10:35:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:08:22.132 10:35:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:08:22.132 10:35:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:22.132 10:35:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:22.132 10:35:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:08:22.132 10:35:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:08:22.132 10:35:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:22.132 10:35:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:22.132 10:35:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:08:22.132 10:35:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:08:22.132 10:35:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:08:22.132 10:35:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:08:22.132 10:35:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:22.132 10:35:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:22.132 10:35:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:22.132 10:35:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:08:22.390 [2024-11-05 10:35:48.214146] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:08:22.390 [2024-11-05 10:35:48.214218] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2863203 ] 00:08:22.648 [2024-11-05 10:35:48.484432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.648 [2024-11-05 10:35:48.532861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.648 [2024-11-05 10:35:48.596784] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.648 [2024-11-05 10:35:48.613024] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:08:22.648 INFO: Running with entropic power schedule (0xFF, 100). 
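[Editor's sketch] The run that just completed again ends with libFuzzer's "Recommended dictionary" block a few lines above, listing the byte strings whose comparisons produced new coverage. Because each run here starts from an empty corpus, one way to carry that knowledge forward would be a libFuzzer dictionary file. The listing below is a hypothetical dictionary built from the two entries recommended above, with the octal escapes rewritten as the \xNN form libFuzzer dictionaries use; whether this harness forwards a -dict= option to libFuzzer is not visible in this log, so treat it as an optional manual step rather than something run.sh is shown to do.

    # nvmf_admin.dict -- hypothetical libFuzzer dictionary from the entries recommended above.
    # "\000:q\277\331\356S\034"   ->  00 3a 71 bf d9 ee 53 1c
    "\x00\x3aq\xbf\xd9\xeeS\x1c"
    # "\021\257\027\015\300q:\000" ->  11 af 17 0d c0 71 3a 00
    "\x11\xaf\x17\x0d\xc0q\x3a\x00"

With such a file supplied to a libFuzzer target, those values would be available from the first input of a later run instead of having to be rediscovered through CMP feedback.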
00:08:22.648 INFO: Seed: 1621994278 00:08:22.648 INFO: Loaded 1 modules (387441 inline 8-bit counters): 387441 [0x2c3ac4c, 0x2c995bd), 00:08:22.648 INFO: Loaded 1 PC tables (387441 PCs): 387441 [0x2c995c0,0x3282cd0), 00:08:22.648 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:08:22.648 INFO: A corpus is not provided, starting from an empty corpus 00:08:22.648 [2024-11-05 10:35:48.658728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.648 [2024-11-05 10:35:48.658756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:22.648 #2 INITED cov: 12198 ft: 12160 corp: 1/1b exec/s: 0 rss: 72Mb 00:08:22.648 [2024-11-05 10:35:48.698732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.648 [2024-11-05 10:35:48.698758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.164 NEW_FUNC[1/1]: 0x1a319e8 in nvme_tcp_ctrlr_connect_qpair_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_tcp.c:2299 00:08:23.164 #3 NEW cov: 12318 ft: 12671 corp: 2/2b lim: 5 exec/s: 0 rss: 73Mb L: 1/1 MS: 1 ShuffleBytes- 00:08:23.164 [2024-11-05 10:35:49.160070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.164 [2024-11-05 10:35:49.160107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.164 #4 NEW cov: 12324 ft: 12912 corp: 3/3b lim: 5 exec/s: 0 rss: 73Mb L: 1/1 MS: 1 ChangeByte- 00:08:23.164 [2024-11-05 10:35:49.200064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.164 [2024-11-05 10:35:49.200096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.164 #5 NEW cov: 12409 ft: 13262 corp: 4/4b lim: 5 exec/s: 0 rss: 73Mb L: 1/1 MS: 1 ChangeByte- 00:08:23.423 [2024-11-05 10:35:49.260996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.423 [2024-11-05 10:35:49.261023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.423 [2024-11-05 10:35:49.261098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.423 [2024-11-05 10:35:49.261113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.423 [2024-11-05 10:35:49.261172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.423 [2024-11-05 10:35:49.261186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:23.423 [2024-11-05 10:35:49.261246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.423 [2024-11-05 10:35:49.261260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:23.423 [2024-11-05 10:35:49.261316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.423 [2024-11-05 10:35:49.261330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:23.423 #6 NEW cov: 12409 ft: 14244 corp: 5/9b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:08:23.423 [2024-11-05 10:35:49.300356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.423 [2024-11-05 10:35:49.300382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.423 #7 NEW cov: 12409 ft: 14353 corp: 6/10b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeBit- 00:08:23.423 [2024-11-05 10:35:49.340477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.423 [2024-11-05 10:35:49.340503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.423 #8 NEW cov: 12409 ft: 14394 corp: 7/11b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeBit- 00:08:23.423 [2024-11-05 10:35:49.400643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.423 [2024-11-05 10:35:49.400668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.423 #9 NEW cov: 12409 ft: 14463 corp: 8/12b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ShuffleBytes- 00:08:23.423 [2024-11-05 10:35:49.440783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.423 [2024-11-05 10:35:49.440808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.423 #10 NEW cov: 12409 ft: 14486 corp: 9/13b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ShuffleBytes- 00:08:23.423 [2024-11-05 10:35:49.481086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.423 [2024-11-05 10:35:49.481115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.423 [2024-11-05 10:35:49.481172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.423 [2024-11-05 10:35:49.481188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.681 #11 NEW cov: 12409 ft: 14749 corp: 10/15b lim: 5 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 CopyPart- 00:08:23.681 [2024-11-05 10:35:49.541231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.681 [2024-11-05 10:35:49.541256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.681 [2024-11-05 10:35:49.541313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.681 [2024-11-05 10:35:49.541328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.681 NEW_FUNC[1/1]: 0x1c30d58 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:23.681 #12 NEW cov: 12432 ft: 14788 corp: 11/17b lim: 5 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 CrossOver- 00:08:23.681 [2024-11-05 10:35:49.581179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.681 [2024-11-05 10:35:49.581204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.681 #13 NEW cov: 12432 ft: 14815 corp: 12/18b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ShuffleBytes- 00:08:23.681 [2024-11-05 10:35:49.641334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.681 [2024-11-05 10:35:49.641359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.681 #14 NEW cov: 12432 ft: 14840 corp: 13/19b lim: 5 exec/s: 14 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:08:23.681 [2024-11-05 10:35:49.681853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.681 [2024-11-05 10:35:49.681878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.681 [2024-11-05 10:35:49.681935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.681 [2024-11-05 10:35:49.681950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.681 [2024-11-05 10:35:49.682003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.681 [2024-11-05 10:35:49.682018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:23.681 #15 NEW cov: 12432 ft: 15010 corp: 14/22b lim: 5 exec/s: 15 rss: 73Mb L: 3/5 MS: 1 CrossOver- 00:08:23.681 [2024-11-05 10:35:49.721962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:23.681 [2024-11-05 10:35:49.721987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.681 [2024-11-05 10:35:49.722045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.681 [2024-11-05 10:35:49.722063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.681 [2024-11-05 10:35:49.722136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.681 [2024-11-05 10:35:49.722151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:23.939 #16 NEW cov: 12432 ft: 15054 corp: 15/25b lim: 5 exec/s: 16 rss: 73Mb L: 3/5 MS: 1 ShuffleBytes- 00:08:23.939 [2024-11-05 10:35:49.781727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.939 [2024-11-05 10:35:49.781752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.939 #17 NEW cov: 12432 ft: 15084 corp: 16/26b lim: 5 exec/s: 17 rss: 73Mb L: 1/5 MS: 1 ChangeBit- 00:08:23.939 [2024-11-05 10:35:49.822248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.939 [2024-11-05 10:35:49.822273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.939 [2024-11-05 10:35:49.822350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.939 [2024-11-05 10:35:49.822365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.939 [2024-11-05 10:35:49.822422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.939 [2024-11-05 10:35:49.822436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:23.939 #18 NEW cov: 12432 ft: 15159 corp: 17/29b lim: 5 exec/s: 18 rss: 73Mb L: 3/5 MS: 1 ShuffleBytes- 00:08:23.939 [2024-11-05 10:35:49.862009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.939 [2024-11-05 10:35:49.862034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.939 #19 NEW cov: 12432 ft: 15167 corp: 18/30b lim: 5 exec/s: 19 rss: 74Mb L: 1/5 MS: 1 CopyPart- 00:08:23.939 [2024-11-05 10:35:49.922177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.939 [2024-11-05 10:35:49.922202] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.939 #20 NEW cov: 12432 ft: 15177 corp: 19/31b lim: 5 exec/s: 20 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:08:23.939 [2024-11-05 10:35:49.962679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.939 [2024-11-05 10:35:49.962704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.939 [2024-11-05 10:35:49.962786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.939 [2024-11-05 10:35:49.962802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.939 [2024-11-05 10:35:49.962859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.939 [2024-11-05 10:35:49.962877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:23.939 #21 NEW cov: 12432 ft: 15192 corp: 20/34b lim: 5 exec/s: 21 rss: 74Mb L: 3/5 MS: 1 ChangeBit- 00:08:24.198 [2024-11-05 10:35:50.022559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.198 [2024-11-05 10:35:50.022636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.198 [2024-11-05 10:35:50.022766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.198 [2024-11-05 10:35:50.022833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.198 [2024-11-05 10:35:50.022933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.198 [2024-11-05 10:35:50.022955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:24.198 #22 NEW cov: 12432 ft: 15392 corp: 21/37b lim: 5 exec/s: 22 rss: 74Mb L: 3/5 MS: 1 ChangeBit- 00:08:24.198 [2024-11-05 10:35:50.083020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.198 [2024-11-05 10:35:50.083055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.198 [2024-11-05 10:35:50.083114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.198 [2024-11-05 10:35:50.083130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.198 [2024-11-05 10:35:50.083188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.198 [2024-11-05 10:35:50.083203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:24.198 #23 NEW cov: 12432 ft: 15416 corp: 22/40b lim: 5 exec/s: 23 rss: 74Mb L: 3/5 MS: 1 InsertByte- 00:08:24.198 [2024-11-05 10:35:50.142787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.198 [2024-11-05 10:35:50.142814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.198 #24 NEW cov: 12432 ft: 15423 corp: 23/41b lim: 5 exec/s: 24 rss: 74Mb L: 1/5 MS: 1 ChangeByte- 00:08:24.198 [2024-11-05 10:35:50.182913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.198 [2024-11-05 10:35:50.182939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.198 #25 NEW cov: 12432 ft: 15462 corp: 24/42b lim: 5 exec/s: 25 rss: 74Mb L: 1/5 MS: 1 CrossOver- 00:08:24.198 [2024-11-05 10:35:50.243251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.198 [2024-11-05 10:35:50.243276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.198 [2024-11-05 10:35:50.243334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.198 [2024-11-05 10:35:50.243349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.457 #26 NEW cov: 12432 ft: 15470 corp: 25/44b lim: 5 exec/s: 26 rss: 74Mb L: 2/5 MS: 1 EraseBytes- 00:08:24.457 [2024-11-05 10:35:50.303398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.457 [2024-11-05 10:35:50.303423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.457 [2024-11-05 10:35:50.303484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.457 [2024-11-05 10:35:50.303499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.457 #27 NEW cov: 12432 ft: 15487 corp: 26/46b lim: 5 exec/s: 27 rss: 74Mb L: 2/5 MS: 1 InsertByte- 00:08:24.457 [2024-11-05 10:35:50.363619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.457 [2024-11-05 10:35:50.363647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.457 [2024-11-05 10:35:50.363706] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.457 [2024-11-05 10:35:50.363725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.457 #28 NEW cov: 12432 ft: 15511 corp: 27/48b lim: 5 exec/s: 28 rss: 74Mb L: 2/5 MS: 1 CopyPart- 00:08:24.457 [2024-11-05 10:35:50.423916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.457 [2024-11-05 10:35:50.423942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.457 [2024-11-05 10:35:50.424004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.457 [2024-11-05 10:35:50.424019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.457 [2024-11-05 10:35:50.424078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.457 [2024-11-05 10:35:50.424092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:24.457 #29 NEW cov: 12432 ft: 15620 corp: 28/51b lim: 5 exec/s: 29 rss: 74Mb L: 3/5 MS: 1 ChangeBit- 00:08:24.457 [2024-11-05 10:35:50.493956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.457 [2024-11-05 10:35:50.493983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.457 [2024-11-05 10:35:50.494057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.457 [2024-11-05 10:35:50.494072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.716 #30 NEW cov: 12432 ft: 15658 corp: 29/53b lim: 5 exec/s: 30 rss: 74Mb L: 2/5 MS: 1 EraseBytes- 00:08:24.716 [2024-11-05 10:35:50.554282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.716 [2024-11-05 10:35:50.554309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.716 [2024-11-05 10:35:50.554370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.716 [2024-11-05 10:35:50.554384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.716 [2024-11-05 10:35:50.554455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.716 [2024-11-05 10:35:50.554471] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:24.716 #31 NEW cov: 12432 ft: 15679 corp: 30/56b lim: 5 exec/s: 31 rss: 74Mb L: 3/5 MS: 1 CopyPart- 00:08:24.716 [2024-11-05 10:35:50.614293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.716 [2024-11-05 10:35:50.614319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.716 [2024-11-05 10:35:50.614379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.716 [2024-11-05 10:35:50.614393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.716 #32 NEW cov: 12432 ft: 15696 corp: 31/58b lim: 5 exec/s: 32 rss: 74Mb L: 2/5 MS: 1 InsertByte- 00:08:24.716 [2024-11-05 10:35:50.654370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.716 [2024-11-05 10:35:50.654396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.716 [2024-11-05 10:35:50.654455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.716 [2024-11-05 10:35:50.654469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.716 #33 NEW cov: 12432 ft: 15705 corp: 32/60b lim: 5 exec/s: 16 rss: 74Mb L: 2/5 MS: 1 CopyPart- 00:08:24.716 #33 DONE cov: 12432 ft: 15705 corp: 32/60b lim: 5 exec/s: 16 rss: 74Mb 00:08:24.716 Done 33 runs in 2 second(s) 00:08:24.975 10:35:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:08:24.975 10:35:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:24.975 10:35:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:24.975 10:35:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:08:24.975 10:35:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:08:24.975 10:35:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:24.975 10:35:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:24.975 10:35:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:08:24.975 10:35:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:08:24.975 10:35:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:24.975 10:35:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:24.975 10:35:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:08:24.975 10:35:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:08:24.975 10:35:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:08:24.975 10:35:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:08:24.975 10:35:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:24.975 10:35:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:24.975 10:35:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:24.975 10:35:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:08:24.975 [2024-11-05 10:35:50.860073] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:08:24.975 [2024-11-05 10:35:50.860144] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2863562 ] 00:08:25.234 [2024-11-05 10:35:51.127660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.234 [2024-11-05 10:35:51.175739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.234 [2024-11-05 10:35:51.239679] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:25.234 [2024-11-05 10:35:51.255913] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:08:25.234 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:25.234 INFO: Seed: 4263992761 00:08:25.234 INFO: Loaded 1 modules (387441 inline 8-bit counters): 387441 [0x2c3ac4c, 0x2c995bd), 00:08:25.234 INFO: Loaded 1 PC tables (387441 PCs): 387441 [0x2c995c0,0x3282cd0), 00:08:25.234 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:08:25.234 INFO: A corpus is not provided, starting from an empty corpus 00:08:25.234 [2024-11-05 10:35:51.301658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.234 [2024-11-05 10:35:51.301687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.492 #2 INITED cov: 12205 ft: 12166 corp: 1/1b exec/s: 0 rss: 71Mb 00:08:25.492 [2024-11-05 10:35:51.341627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.492 [2024-11-05 10:35:51.341653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.492 #3 NEW cov: 12318 ft: 12819 corp: 2/2b lim: 5 exec/s: 0 rss: 72Mb L: 1/1 MS: 1 ChangeByte- 00:08:25.492 [2024-11-05 10:35:51.401999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.492 [2024-11-05 10:35:51.402026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.492 [2024-11-05 10:35:51.402087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.492 [2024-11-05 10:35:51.402101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:25.492 #4 NEW cov: 12324 ft: 13736 corp: 3/4b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 CrossOver- 00:08:25.492 [2024-11-05 10:35:51.441925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.492 [2024-11-05 10:35:51.441952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.492 #5 NEW cov: 12409 ft: 14028 corp: 4/5b lim: 5 exec/s: 0 rss: 72Mb L: 1/2 MS: 1 EraseBytes- 00:08:25.492 [2024-11-05 10:35:51.502267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.492 [2024-11-05 10:35:51.502300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.492 [2024-11-05 10:35:51.502377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.492 [2024-11-05 10:35:51.502392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:25.492 #6 NEW cov: 12409 ft: 14083 corp: 5/7b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 CrossOver- 00:08:25.492 
[2024-11-05 10:35:51.562471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.492 [2024-11-05 10:35:51.562497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.492 [2024-11-05 10:35:51.562573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.492 [2024-11-05 10:35:51.562588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:25.751 #7 NEW cov: 12409 ft: 14153 corp: 6/9b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 CrossOver- 00:08:25.751 [2024-11-05 10:35:51.622822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.751 [2024-11-05 10:35:51.622849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.751 [2024-11-05 10:35:51.622910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.751 [2024-11-05 10:35:51.622925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:25.751 [2024-11-05 10:35:51.622985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.751 [2024-11-05 10:35:51.623000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:25.751 #8 NEW cov: 12409 ft: 14418 corp: 7/12b lim: 5 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 InsertByte- 00:08:25.751 [2024-11-05 10:35:51.662691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.751 [2024-11-05 10:35:51.662721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.751 [2024-11-05 10:35:51.662798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.751 [2024-11-05 10:35:51.662813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:25.751 #9 NEW cov: 12409 ft: 14472 corp: 8/14b lim: 5 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 ChangeByte- 00:08:25.751 [2024-11-05 10:35:51.702860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.751 [2024-11-05 10:35:51.702887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.751 [2024-11-05 10:35:51.702961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.751 
[2024-11-05 10:35:51.702977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:25.751 #10 NEW cov: 12409 ft: 14597 corp: 9/16b lim: 5 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 ChangeByte- 00:08:25.751 [2024-11-05 10:35:51.743154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.751 [2024-11-05 10:35:51.743181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.751 [2024-11-05 10:35:51.743258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.751 [2024-11-05 10:35:51.743273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:25.751 [2024-11-05 10:35:51.743329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.751 [2024-11-05 10:35:51.743342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:25.751 #11 NEW cov: 12409 ft: 14677 corp: 10/19b lim: 5 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 InsertByte- 00:08:25.751 [2024-11-05 10:35:51.782905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.751 [2024-11-05 10:35:51.782930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.751 #12 NEW cov: 12409 ft: 14749 corp: 11/20b lim: 5 exec/s: 0 rss: 72Mb L: 1/3 MS: 1 CopyPart- 00:08:25.751 [2024-11-05 10:35:51.823229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.751 [2024-11-05 10:35:51.823254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.751 [2024-11-05 10:35:51.823313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.751 [2024-11-05 10:35:51.823328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.009 #13 NEW cov: 12409 ft: 14810 corp: 12/22b lim: 5 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 ChangeBit- 00:08:26.009 [2024-11-05 10:35:51.863135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.009 [2024-11-05 10:35:51.863161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.009 #14 NEW cov: 12409 ft: 14872 corp: 13/23b lim: 5 exec/s: 0 rss: 72Mb L: 1/3 MS: 1 ChangeBit- 00:08:26.009 [2024-11-05 10:35:51.903649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.009 
[2024-11-05 10:35:51.903674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.009 [2024-11-05 10:35:51.903730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.009 [2024-11-05 10:35:51.903762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.009 [2024-11-05 10:35:51.903819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.009 [2024-11-05 10:35:51.903832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.009 #15 NEW cov: 12409 ft: 14898 corp: 14/26b lim: 5 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 CrossOver- 00:08:26.009 [2024-11-05 10:35:51.963978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.009 [2024-11-05 10:35:51.964004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.009 [2024-11-05 10:35:51.964081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.009 [2024-11-05 10:35:51.964096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.009 [2024-11-05 10:35:51.964155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.009 [2024-11-05 10:35:51.964169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.009 [2024-11-05 10:35:51.964225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.009 [2024-11-05 10:35:51.964238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:26.009 #16 NEW cov: 12409 ft: 15187 corp: 15/30b lim: 5 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 CMP- DE: "\000\000"- 00:08:26.009 [2024-11-05 10:35:52.023994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.009 [2024-11-05 10:35:52.024019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.009 [2024-11-05 10:35:52.024105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.009 [2024-11-05 10:35:52.024120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.009 [2024-11-05 10:35:52.024177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.009 [2024-11-05 10:35:52.024190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.009 #17 NEW cov: 12409 ft: 15194 corp: 16/33b lim: 5 exec/s: 0 rss: 72Mb L: 3/4 MS: 1 PersAutoDict- DE: "\000\000"- 00:08:26.009 [2024-11-05 10:35:52.064094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.009 [2024-11-05 10:35:52.064119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.009 [2024-11-05 10:35:52.064176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.009 [2024-11-05 10:35:52.064190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.009 [2024-11-05 10:35:52.064247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.009 [2024-11-05 10:35:52.064260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.268 #18 NEW cov: 12409 ft: 15221 corp: 17/36b lim: 5 exec/s: 0 rss: 72Mb L: 3/4 MS: 1 ShuffleBytes- 00:08:26.268 [2024-11-05 10:35:52.123904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.268 [2024-11-05 10:35:52.123935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.268 #19 NEW cov: 12409 ft: 15263 corp: 18/37b lim: 5 exec/s: 0 rss: 72Mb L: 1/4 MS: 1 CrossOver- 00:08:26.268 [2024-11-05 10:35:52.184065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.268 [2024-11-05 10:35:52.184091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.526 NEW_FUNC[1/1]: 0x1c30d58 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:26.526 #20 NEW cov: 12432 ft: 15331 corp: 19/38b lim: 5 exec/s: 20 rss: 73Mb L: 1/4 MS: 1 ChangeBit- 00:08:26.526 [2024-11-05 10:35:52.505507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.526 [2024-11-05 10:35:52.505543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.526 [2024-11-05 10:35:52.505602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.526 [2024-11-05 10:35:52.505618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.526 [2024-11-05 10:35:52.505672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.526 [2024-11-05 10:35:52.505686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.526 [2024-11-05 10:35:52.505742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.526 [2024-11-05 10:35:52.505757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:26.526 #21 NEW cov: 12432 ft: 15369 corp: 20/42b lim: 5 exec/s: 21 rss: 73Mb L: 4/4 MS: 1 ChangeBinInt- 00:08:26.526 [2024-11-05 10:35:52.565510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.526 [2024-11-05 10:35:52.565537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.526 [2024-11-05 10:35:52.565612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.526 [2024-11-05 10:35:52.565626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.526 [2024-11-05 10:35:52.565683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.526 [2024-11-05 10:35:52.565697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.526 [2024-11-05 10:35:52.565754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.526 [2024-11-05 10:35:52.565769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:26.526 #22 NEW cov: 12432 ft: 15439 corp: 21/46b lim: 5 exec/s: 22 rss: 73Mb L: 4/4 MS: 1 PersAutoDict- DE: "\000\000"- 00:08:26.785 [2024-11-05 10:35:52.605471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.785 [2024-11-05 10:35:52.605496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.785 [2024-11-05 10:35:52.605556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.785 [2024-11-05 10:35:52.605570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.785 [2024-11-05 10:35:52.605626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.785 [2024-11-05 10:35:52.605640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.785 #23 
NEW cov: 12432 ft: 15449 corp: 22/49b lim: 5 exec/s: 23 rss: 73Mb L: 3/4 MS: 1 ChangeBit- 00:08:26.785 [2024-11-05 10:35:52.645215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.785 [2024-11-05 10:35:52.645240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.785 #24 NEW cov: 12432 ft: 15462 corp: 23/50b lim: 5 exec/s: 24 rss: 73Mb L: 1/4 MS: 1 CopyPart- 00:08:26.785 [2024-11-05 10:35:52.685865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.785 [2024-11-05 10:35:52.685891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.785 [2024-11-05 10:35:52.685962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.785 [2024-11-05 10:35:52.685977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.785 [2024-11-05 10:35:52.686031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.785 [2024-11-05 10:35:52.686045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.785 [2024-11-05 10:35:52.686100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.785 [2024-11-05 10:35:52.686114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:26.785 #25 NEW cov: 12432 ft: 15473 corp: 24/54b lim: 5 exec/s: 25 rss: 74Mb L: 4/4 MS: 1 InsertByte- 00:08:26.785 [2024-11-05 10:35:52.746199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.785 [2024-11-05 10:35:52.746225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.785 [2024-11-05 10:35:52.746284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.785 [2024-11-05 10:35:52.746299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.785 [2024-11-05 10:35:52.746357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.785 [2024-11-05 10:35:52.746371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.785 [2024-11-05 10:35:52.746427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.785 
[2024-11-05 10:35:52.746444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:26.785 [2024-11-05 10:35:52.746502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.785 [2024-11-05 10:35:52.746515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:26.785 #26 NEW cov: 12432 ft: 15599 corp: 25/59b lim: 5 exec/s: 26 rss: 74Mb L: 5/5 MS: 1 PersAutoDict- DE: "\000\000"- 00:08:26.785 [2024-11-05 10:35:52.805666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.785 [2024-11-05 10:35:52.805691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.785 #27 NEW cov: 12432 ft: 15619 corp: 26/60b lim: 5 exec/s: 27 rss: 74Mb L: 1/5 MS: 1 ChangeByte- 00:08:27.043 [2024-11-05 10:35:52.866397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.043 [2024-11-05 10:35:52.866425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:27.044 [2024-11-05 10:35:52.866481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.044 [2024-11-05 10:35:52.866496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:27.044 [2024-11-05 10:35:52.866550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.044 [2024-11-05 10:35:52.866564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:27.044 [2024-11-05 10:35:52.866619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.044 [2024-11-05 10:35:52.866633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:27.044 #28 NEW cov: 12432 ft: 15630 corp: 27/64b lim: 5 exec/s: 28 rss: 74Mb L: 4/5 MS: 1 ShuffleBytes- 00:08:27.044 [2024-11-05 10:35:52.925975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.044 [2024-11-05 10:35:52.926000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:27.044 #29 NEW cov: 12432 ft: 15645 corp: 28/65b lim: 5 exec/s: 29 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:08:27.044 [2024-11-05 10:35:52.986346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.044 [2024-11-05 10:35:52.986371] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:27.044 [2024-11-05 10:35:52.986443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.044 [2024-11-05 10:35:52.986458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:27.044 #30 NEW cov: 12432 ft: 15650 corp: 29/67b lim: 5 exec/s: 30 rss: 74Mb L: 2/5 MS: 1 InsertByte- 00:08:27.044 [2024-11-05 10:35:53.026983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.044 [2024-11-05 10:35:53.027009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:27.044 [2024-11-05 10:35:53.027068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.044 [2024-11-05 10:35:53.027083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:27.044 [2024-11-05 10:35:53.027137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.044 [2024-11-05 10:35:53.027150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:27.044 [2024-11-05 10:35:53.027203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.044 [2024-11-05 10:35:53.027217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:27.044 [2024-11-05 10:35:53.027273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.044 [2024-11-05 10:35:53.027286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:27.044 #31 NEW cov: 12432 ft: 15677 corp: 30/72b lim: 5 exec/s: 31 rss: 74Mb L: 5/5 MS: 1 InsertByte- 00:08:27.044 [2024-11-05 10:35:53.086489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.044 [2024-11-05 10:35:53.086514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:27.303 #32 NEW cov: 12432 ft: 15685 corp: 31/73b lim: 5 exec/s: 32 rss: 74Mb L: 1/5 MS: 1 ChangeBit- 00:08:27.303 [2024-11-05 10:35:53.146656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.303 [2024-11-05 10:35:53.146681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:27.303 #33 NEW cov: 12432 ft: 15695 corp: 32/74b lim: 5 exec/s: 33 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:08:27.303 
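The per-finding status lines in this output follow libFuzzer's usual format: cov counts covered code edges/blocks, ft counts features, corp gives the corpus size in units and bytes, lim is the current input-length cap, exec/s the execution rate, rss resident memory, L the length of the new input next to the largest unit in the corpus, and MS the mutation sequence that produced it (PersAutoDict/DE entries come from the persistent auto-dictionary). To pull just the coverage progression out of a saved console log, a minimal sketch, assuming the output above was captured to run.log (the file name is only an example):

  grep -oE '#[0-9]+ NEW cov: [0-9]+' run.log | awk '{print $1, $NF}'

This prints each finding number next to its coverage count, which makes it easy to compare coverage growth between fuzzer rounds.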
[2024-11-05 10:35:53.186913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.303 [2024-11-05 10:35:53.186938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:27.303 [2024-11-05 10:35:53.187013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.303 [2024-11-05 10:35:53.187028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:27.303 #34 NEW cov: 12432 ft: 15704 corp: 33/76b lim: 5 exec/s: 34 rss: 74Mb L: 2/5 MS: 1 CopyPart- 00:08:27.303 [2024-11-05 10:35:53.227383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.303 [2024-11-05 10:35:53.227409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:27.303 [2024-11-05 10:35:53.227466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.303 [2024-11-05 10:35:53.227480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:27.303 [2024-11-05 10:35:53.227537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.303 [2024-11-05 10:35:53.227554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:27.303 [2024-11-05 10:35:53.227610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.303 [2024-11-05 10:35:53.227624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:27.303 #35 NEW cov: 12432 ft: 15708 corp: 34/80b lim: 5 exec/s: 35 rss: 74Mb L: 4/5 MS: 1 CrossOver- 00:08:27.303 [2024-11-05 10:35:53.267509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.303 [2024-11-05 10:35:53.267537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:27.303 [2024-11-05 10:35:53.267597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.303 [2024-11-05 10:35:53.267612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:27.303 [2024-11-05 10:35:53.267669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.303 [2024-11-05 10:35:53.267684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:27.303 [2024-11-05 10:35:53.267759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.303 [2024-11-05 10:35:53.267775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:27.303 #36 NEW cov: 12432 ft: 15714 corp: 35/84b lim: 5 exec/s: 18 rss: 74Mb L: 4/5 MS: 1 ChangeBinInt- 00:08:27.303 #36 DONE cov: 12432 ft: 15714 corp: 35/84b lim: 5 exec/s: 18 rss: 74Mb 00:08:27.303 ###### Recommended dictionary. ###### 00:08:27.303 "\000\000" # Uses: 3 00:08:27.303 ###### End of recommended dictionary. ###### 00:08:27.303 Done 36 runs in 2 second(s) 00:08:27.562 10:35:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:08:27.562 10:35:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:27.562 10:35:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:27.562 10:35:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:08:27.562 10:35:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:08:27.562 10:35:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:27.562 10:35:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:27.562 10:35:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:08:27.562 10:35:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:08:27.562 10:35:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:27.562 10:35:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:27.562 10:35:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:08:27.562 10:35:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:08:27.562 10:35:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:08:27.562 10:35:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:08:27.562 10:35:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:27.562 10:35:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:27.562 10:35:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:27.562 10:35:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:08:27.562 [2024-11-05 10:35:53.459683] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
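The run.sh trace above shows the setup that every fuzzer round repeats before the target's own output starts: derive a per-round TCP port from the fuzzer index (fuzzer 10 listens on 4410), create a dedicated corpus directory, rewrite the listener trsvcid in the shared fuzz_json.conf, register two known-leak LSAN suppressions, and launch llvm_nvme_fuzz against the resulting transport ID. A condensed sketch of that sequence, with SPDK_DIR and OUT_DIR standing in for the long workspace paths and the redirection targets inferred from the nvmf_cfg and suppress_file variables (bash -x tracing does not print redirections):

  fuzzer_type=10
  port="44$(printf %02d "$fuzzer_type")"        # 4410 for round 10, 4411 for round 11, one listener per round
  corpus_dir="$SPDK_DIR/../corpus/llvm_nvmf_${fuzzer_type}"
  nvmf_cfg="/tmp/fuzz_json_${fuzzer_type}.conf"
  suppress_file=/var/tmp/suppress_nvmf_fuzz
  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:${port}"
  mkdir -p "$corpus_dir"
  # Point the JSON config at this round's port instead of the default 4420.
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"${port}\"/" \
      "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
  # Known allocations that outlive the run; LSAN_OPTIONS points at this suppression file.
  echo leak:spdk_nvmf_qpair_disconnect  > "$suppress_file"
  echo leak:nvmf_ctrlr_create          >> "$suppress_file"
  LSAN_OPTIONS="report_objects=1:suppressions=${suppress_file}:print_suppressions=0" \
    "$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
      -P "$OUT_DIR/llvm/" -F "$trid" -c "$nvmf_cfg" -t 1 -D "$corpus_dir" -Z "$fuzzer_type"

The -t flag carries the timen value from the trace, -D seeds and collects the per-round corpus, and -Z passes the fuzzer index, which is consistent with the admin opcode under test changing from NAMESPACE MANAGEMENT to SECURITY RECEIVE at this point in the log.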
00:08:27.562 [2024-11-05 10:35:53.459761] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2863920 ] 00:08:27.820 [2024-11-05 10:35:53.729141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.820 [2024-11-05 10:35:53.777117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.820 [2024-11-05 10:35:53.841064] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.820 [2024-11-05 10:35:53.857300] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:08:27.820 INFO: Running with entropic power schedule (0xFF, 100). 00:08:27.820 INFO: Seed: 2569017002 00:08:27.820 INFO: Loaded 1 modules (387441 inline 8-bit counters): 387441 [0x2c3ac4c, 0x2c995bd), 00:08:27.820 INFO: Loaded 1 PC tables (387441 PCs): 387441 [0x2c995c0,0x3282cd0), 00:08:27.820 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:08:27.820 INFO: A corpus is not provided, starting from an empty corpus 00:08:27.820 #2 INITED exec/s: 0 rss: 66Mb 00:08:27.820 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:27.820 This may also happen if the target rejected all inputs we tried so far 00:08:28.078 [2024-11-05 10:35:53.906517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff39 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.078 [2024-11-05 10:35:53.906546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.336 NEW_FUNC[1/715]: 0x448a88 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:08:28.336 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:28.336 #11 NEW cov: 12228 ft: 12191 corp: 2/12b lim: 40 exec/s: 0 rss: 73Mb L: 11/11 MS: 4 InsertByte-ChangeByte-CrossOver-CMP- DE: "\377\377\377\377\377\377\3779"- 00:08:28.336 [2024-11-05 10:35:54.227465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff35 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.336 [2024-11-05 10:35:54.227500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.336 #12 NEW cov: 12341 ft: 12575 corp: 3/23b lim: 40 exec/s: 0 rss: 73Mb L: 11/11 MS: 1 ChangeASCIIInt- 00:08:28.336 [2024-11-05 10:35:54.287488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:28ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.336 [2024-11-05 10:35:54.287515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.336 #13 NEW cov: 12347 ft: 12903 corp: 4/35b lim: 40 exec/s: 0 rss: 73Mb L: 12/12 MS: 1 InsertByte- 00:08:28.336 [2024-11-05 10:35:54.327532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff39 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.336 
[2024-11-05 10:35:54.327559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.336 #20 NEW cov: 12432 ft: 13330 corp: 5/44b lim: 40 exec/s: 0 rss: 73Mb L: 9/12 MS: 2 ShuffleBytes-PersAutoDict- DE: "\377\377\377\377\377\377\3779"- 00:08:28.336 [2024-11-05 10:35:54.367692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.337 [2024-11-05 10:35:54.367724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.337 #21 NEW cov: 12432 ft: 13411 corp: 6/55b lim: 40 exec/s: 0 rss: 73Mb L: 11/12 MS: 1 CopyPart- 00:08:28.595 [2024-11-05 10:35:54.427907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.595 [2024-11-05 10:35:54.427934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.595 #22 NEW cov: 12432 ft: 13527 corp: 7/66b lim: 40 exec/s: 0 rss: 73Mb L: 11/12 MS: 1 ShuffleBytes- 00:08:28.595 [2024-11-05 10:35:54.488473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.595 [2024-11-05 10:35:54.488499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.595 [2024-11-05 10:35:54.488581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.595 [2024-11-05 10:35:54.488596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:28.595 [2024-11-05 10:35:54.488658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.595 [2024-11-05 10:35:54.488672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:28.595 [2024-11-05 10:35:54.488734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.595 [2024-11-05 10:35:54.488749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:28.595 #23 NEW cov: 12432 ft: 14218 corp: 8/102b lim: 40 exec/s: 0 rss: 73Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:08:28.595 [2024-11-05 10:35:54.548181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.595 [2024-11-05 10:35:54.548208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.595 #24 NEW cov: 12432 ft: 14248 corp: 9/111b lim: 40 exec/s: 0 rss: 73Mb L: 9/36 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\3779"- 00:08:28.595 [2024-11-05 10:35:54.588358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) 
qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:394fe40a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.595 [2024-11-05 10:35:54.588383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.595 #25 NEW cov: 12432 ft: 14294 corp: 10/119b lim: 40 exec/s: 0 rss: 73Mb L: 8/36 MS: 1 EraseBytes- 00:08:28.595 [2024-11-05 10:35:54.628909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.595 [2024-11-05 10:35:54.628935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.595 [2024-11-05 10:35:54.628998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:fffffcff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.595 [2024-11-05 10:35:54.629019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:28.595 [2024-11-05 10:35:54.629081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.595 [2024-11-05 10:35:54.629095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:28.595 [2024-11-05 10:35:54.629158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.595 [2024-11-05 10:35:54.629173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:28.595 #26 NEW cov: 12432 ft: 14321 corp: 11/155b lim: 40 exec/s: 0 rss: 73Mb L: 36/36 MS: 1 ChangeBinInt- 00:08:28.853 [2024-11-05 10:35:54.688635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.853 [2024-11-05 10:35:54.688661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.853 #27 NEW cov: 12432 ft: 14351 corp: 12/166b lim: 40 exec/s: 0 rss: 73Mb L: 11/36 MS: 1 ShuffleBytes- 00:08:28.853 [2024-11-05 10:35:54.728859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff35 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.853 [2024-11-05 10:35:54.728884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.854 [2024-11-05 10:35:54.728948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:4f000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.854 [2024-11-05 10:35:54.728962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:28.854 #28 NEW cov: 12432 ft: 14588 corp: 13/185b lim: 40 exec/s: 0 rss: 73Mb L: 19/36 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\003"- 00:08:28.854 [2024-11-05 10:35:54.769315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff24ffff SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:08:28.854 [2024-11-05 10:35:54.769340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.854 [2024-11-05 10:35:54.769405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.854 [2024-11-05 10:35:54.769420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:28.854 [2024-11-05 10:35:54.769497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.854 [2024-11-05 10:35:54.769512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:28.854 [2024-11-05 10:35:54.769577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.854 [2024-11-05 10:35:54.769591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:28.854 NEW_FUNC[1/1]: 0x1c30d58 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:28.854 #29 NEW cov: 12455 ft: 14654 corp: 14/221b lim: 40 exec/s: 0 rss: 73Mb L: 36/36 MS: 1 ChangeByte- 00:08:28.854 [2024-11-05 10:35:54.808983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.854 [2024-11-05 10:35:54.809011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.854 #30 NEW cov: 12455 ft: 14686 corp: 15/232b lim: 40 exec/s: 0 rss: 73Mb L: 11/36 MS: 1 ShuffleBytes- 00:08:28.854 [2024-11-05 10:35:54.849239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.854 [2024-11-05 10:35:54.849265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.854 [2024-11-05 10:35:54.849345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff394f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.854 [2024-11-05 10:35:54.849361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:28.854 #31 NEW cov: 12455 ft: 14699 corp: 16/250b lim: 40 exec/s: 0 rss: 73Mb L: 18/36 MS: 1 CopyPart- 00:08:28.854 [2024-11-05 10:35:54.889219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff4fe40a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.854 [2024-11-05 10:35:54.889244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.854 #32 NEW cov: 12455 ft: 14736 corp: 17/258b lim: 40 exec/s: 32 rss: 73Mb L: 8/36 MS: 1 EraseBytes- 00:08:28.854 [2024-11-05 10:35:54.929358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffff7aff 
cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:28.854 [2024-11-05 10:35:54.929383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.112 #33 NEW cov: 12455 ft: 14787 corp: 18/270b lim: 40 exec/s: 33 rss: 74Mb L: 12/36 MS: 1 InsertByte- 00:08:29.112 [2024-11-05 10:35:54.989480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.112 [2024-11-05 10:35:54.989506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.112 #34 NEW cov: 12455 ft: 14844 corp: 19/279b lim: 40 exec/s: 34 rss: 74Mb L: 9/36 MS: 1 ChangeByte- 00:08:29.112 [2024-11-05 10:35:55.049725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:4e4fe4ff cdw11:ffff0aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.112 [2024-11-05 10:35:55.049750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.112 #39 NEW cov: 12455 ft: 14870 corp: 20/294b lim: 40 exec/s: 39 rss: 74Mb L: 15/36 MS: 5 EraseBytes-ChangeByte-EraseBytes-EraseBytes-CrossOver- 00:08:29.112 [2024-11-05 10:35:55.110331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.112 [2024-11-05 10:35:55.110357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.112 [2024-11-05 10:35:55.110423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:fffffcff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.112 [2024-11-05 10:35:55.110438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.112 [2024-11-05 10:35:55.110501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffefffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.112 [2024-11-05 10:35:55.110515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:29.112 [2024-11-05 10:35:55.110575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.112 [2024-11-05 10:35:55.110592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:29.112 #40 NEW cov: 12455 ft: 14916 corp: 21/330b lim: 40 exec/s: 40 rss: 74Mb L: 36/36 MS: 1 ChangeBit- 00:08:29.112 [2024-11-05 10:35:55.170033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.112 [2024-11-05 10:35:55.170059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.112 #41 NEW cov: 12455 ft: 14926 corp: 22/339b lim: 40 exec/s: 41 rss: 74Mb L: 9/36 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\003"- 00:08:29.371 [2024-11-05 10:35:55.210145] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.371 [2024-11-05 10:35:55.210169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.371 #42 NEW cov: 12455 ft: 14931 corp: 23/354b lim: 40 exec/s: 42 rss: 74Mb L: 15/36 MS: 1 EraseBytes- 00:08:29.371 [2024-11-05 10:35:55.270315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff394f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.371 [2024-11-05 10:35:55.270340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.371 #43 NEW cov: 12455 ft: 14944 corp: 24/363b lim: 40 exec/s: 43 rss: 74Mb L: 9/36 MS: 1 CrossOver- 00:08:29.371 [2024-11-05 10:35:55.310473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff394f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.371 [2024-11-05 10:35:55.310498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.371 #44 NEW cov: 12455 ft: 15048 corp: 25/373b lim: 40 exec/s: 44 rss: 74Mb L: 10/36 MS: 1 InsertByte- 00:08:29.371 [2024-11-05 10:35:55.370805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.371 [2024-11-05 10:35:55.370831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.371 [2024-11-05 10:35:55.370894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.371 [2024-11-05 10:35:55.370909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.371 #45 NEW cov: 12455 ft: 15061 corp: 26/396b lim: 40 exec/s: 45 rss: 74Mb L: 23/36 MS: 1 CopyPart- 00:08:29.371 [2024-11-05 10:35:55.411222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.371 [2024-11-05 10:35:55.411248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.371 [2024-11-05 10:35:55.411311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:fffffcff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.371 [2024-11-05 10:35:55.411326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.371 [2024-11-05 10:35:55.411386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff0aefff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.371 [2024-11-05 10:35:55.411401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:29.371 [2024-11-05 10:35:55.411466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff 
cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.371 [2024-11-05 10:35:55.411484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:29.629 #46 NEW cov: 12455 ft: 15089 corp: 27/433b lim: 40 exec/s: 46 rss: 74Mb L: 37/37 MS: 1 CopyPart- 00:08:29.629 [2024-11-05 10:35:55.471391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.629 [2024-11-05 10:35:55.471418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.629 [2024-11-05 10:35:55.471482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:fffffcff cdw11:ffff354f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.629 [2024-11-05 10:35:55.471497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.629 [2024-11-05 10:35:55.471563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:e40affff cdw11:ff0aefff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.629 [2024-11-05 10:35:55.471577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:29.629 [2024-11-05 10:35:55.471640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.629 [2024-11-05 10:35:55.471654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:29.629 #47 NEW cov: 12455 ft: 15114 corp: 28/470b lim: 40 exec/s: 47 rss: 74Mb L: 37/37 MS: 1 CopyPart- 00:08:29.629 [2024-11-05 10:35:55.531113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffff5dff cdw11:394f7a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.629 [2024-11-05 10:35:55.531140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.629 #49 NEW cov: 12455 ft: 15126 corp: 29/478b lim: 40 exec/s: 49 rss: 74Mb L: 8/37 MS: 2 EraseBytes-InsertByte- 00:08:29.629 [2024-11-05 10:35:55.591304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffff5dff cdw11:344f7a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.629 [2024-11-05 10:35:55.591329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.629 #50 NEW cov: 12455 ft: 15135 corp: 30/486b lim: 40 exec/s: 50 rss: 74Mb L: 8/37 MS: 1 ChangeASCIIInt- 00:08:29.629 [2024-11-05 10:35:55.651923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff35 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.629 [2024-11-05 10:35:55.651949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.629 [2024-11-05 10:35:55.652029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:4f000000 cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.629 [2024-11-05 
10:35:55.652045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.629 [2024-11-05 10:35:55.652109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff354f00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.629 [2024-11-05 10:35:55.652123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:29.629 [2024-11-05 10:35:55.652186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:000003e4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.629 [2024-11-05 10:35:55.652200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:29.629 #51 NEW cov: 12455 ft: 15144 corp: 31/519b lim: 40 exec/s: 51 rss: 74Mb L: 33/37 MS: 1 CopyPart- 00:08:29.888 [2024-11-05 10:35:55.711695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff394f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.888 [2024-11-05 10:35:55.711729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.888 #52 NEW cov: 12455 ft: 15147 corp: 32/529b lim: 40 exec/s: 52 rss: 74Mb L: 10/37 MS: 1 ChangeByte- 00:08:29.888 [2024-11-05 10:35:55.751743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:09ff394f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.888 [2024-11-05 10:35:55.751771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.888 #53 NEW cov: 12455 ft: 15148 corp: 33/538b lim: 40 exec/s: 53 rss: 74Mb L: 9/37 MS: 1 ChangeBinInt- 00:08:29.888 [2024-11-05 10:35:55.792185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff35 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.888 [2024-11-05 10:35:55.792211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.888 [2024-11-05 10:35:55.792276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:4f000000 cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.888 [2024-11-05 10:35:55.792290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.888 [2024-11-05 10:35:55.792354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff354f00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.888 [2024-11-05 10:35:55.792367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:29.888 #54 NEW cov: 12455 ft: 15325 corp: 34/569b lim: 40 exec/s: 54 rss: 75Mb L: 31/37 MS: 1 EraseBytes- 00:08:29.888 [2024-11-05 10:35:55.852322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff24ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.888 [2024-11-05 10:35:55.852348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.888 [2024-11-05 10:35:55.852429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.888 [2024-11-05 10:35:55.852443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.888 [2024-11-05 10:35:55.852507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:354fe40a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.888 [2024-11-05 10:35:55.852523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:29.888 #55 NEW cov: 12455 ft: 15333 corp: 35/593b lim: 40 exec/s: 27 rss: 75Mb L: 24/37 MS: 1 EraseBytes- 00:08:29.888 #55 DONE cov: 12455 ft: 15333 corp: 35/593b lim: 40 exec/s: 27 rss: 75Mb 00:08:29.888 ###### Recommended dictionary. ###### 00:08:29.888 "\377\377\377\377\377\377\3779" # Uses: 2 00:08:29.888 "\000\000\000\000\000\000\000\003" # Uses: 1 00:08:29.888 ###### End of recommended dictionary. ###### 00:08:29.888 Done 55 runs in 2 second(s) 00:08:30.146 10:35:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:08:30.146 10:35:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:30.146 10:35:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:30.146 10:35:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:08:30.146 10:35:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:08:30.146 10:35:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:30.146 10:35:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:30.146 10:35:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:30.146 10:35:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:08:30.146 10:35:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:30.146 10:35:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:30.146 10:35:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:08:30.146 10:35:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:08:30.146 10:35:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:30.146 10:35:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:08:30.146 10:35:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:30.146 10:35:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:30.146 10:35:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:30.147 10:35:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:08:30.147 [2024-11-05 10:35:56.064623] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:08:30.147 [2024-11-05 10:35:56.064696] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2864271 ] 00:08:30.405 [2024-11-05 10:35:56.332142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.405 [2024-11-05 10:35:56.379602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.405 [2024-11-05 10:35:56.443819] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.405 [2024-11-05 10:35:56.460053] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:08:30.405 INFO: Running with entropic power schedule (0xFF, 100). 00:08:30.405 INFO: Seed: 877066401 00:08:30.663 INFO: Loaded 1 modules (387441 inline 8-bit counters): 387441 [0x2c3ac4c, 0x2c995bd), 00:08:30.663 INFO: Loaded 1 PC tables (387441 PCs): 387441 [0x2c995c0,0x3282cd0), 00:08:30.663 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:30.663 INFO: A corpus is not provided, starting from an empty corpus 00:08:30.663 #2 INITED exec/s: 0 rss: 66Mb 00:08:30.663 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:30.663 This may also happen if the target rejected all inputs we tried so far 00:08:30.663 [2024-11-05 10:35:56.509929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.663 [2024-11-05 10:35:56.509959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.663 [2024-11-05 10:35:56.510025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.663 [2024-11-05 10:35:56.510039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.663 [2024-11-05 10:35:56.510102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.663 [2024-11-05 10:35:56.510122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:30.921 NEW_FUNC[1/716]: 0x44a7f8 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:08:30.921 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:30.921 #16 NEW cov: 12236 ft: 12233 corp: 2/29b lim: 40 exec/s: 0 rss: 73Mb L: 28/28 MS: 4 CopyPart-ChangeBit-CMP-InsertRepeatedBytes- DE: "\377s"- 00:08:30.921 [2024-11-05 10:35:56.830678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.921 [2024-11-05 10:35:56.830719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.921 [2024-11-05 10:35:56.830778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.921 [2024-11-05 10:35:56.830792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.921 [2024-11-05 10:35:56.830851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.921 [2024-11-05 10:35:56.830865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:30.921 #22 NEW cov: 12353 ft: 12613 corp: 3/56b lim: 40 exec/s: 0 rss: 73Mb L: 27/28 MS: 1 EraseBytes- 00:08:30.921 [2024-11-05 10:35:56.890935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.921 [2024-11-05 10:35:56.890961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.921 [2024-11-05 10:35:56.891022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.921 [2024-11-05 10:35:56.891036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.921 [2024-11-05 10:35:56.891092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.921 [2024-11-05 10:35:56.891106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:30.921 [2024-11-05 10:35:56.891161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.921 [2024-11-05 10:35:56.891175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:30.921 #25 NEW cov: 12359 ft: 13315 corp: 4/89b lim: 40 exec/s: 0 rss: 73Mb L: 33/33 MS: 3 ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:08:30.921 [2024-11-05 10:35:56.930464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d1d1ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.921 [2024-11-05 10:35:56.930490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.921 #33 NEW cov: 12444 ft: 14266 corp: 5/99b lim: 40 exec/s: 0 rss: 73Mb L: 10/33 MS: 3 ChangeByte-CopyPart-CrossOver- 00:08:30.921 [2024-11-05 10:35:56.970613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff730a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.921 [2024-11-05 10:35:56.970638] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.921 #36 NEW cov: 12444 ft: 14366 corp: 6/107b lim: 40 exec/s: 0 rss: 73Mb L: 8/33 MS: 3 CrossOver-InsertRepeatedBytes-PersAutoDict- DE: "\377s"- 00:08:31.179 [2024-11-05 10:35:57.010861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d1d1ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.179 [2024-11-05 10:35:57.010887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.179 [2024-11-05 10:35:57.010944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.179 [2024-11-05 10:35:57.010958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.179 #37 NEW cov: 12444 ft: 14667 corp: 7/123b lim: 40 exec/s: 0 rss: 73Mb L: 16/33 MS: 1 CrossOver- 00:08:31.179 [2024-11-05 10:35:57.070912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4b480000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.179 [2024-11-05 10:35:57.070940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.179 #41 NEW cov: 12444 ft: 14750 corp: 8/133b lim: 40 exec/s: 0 rss: 73Mb L: 10/33 MS: 4 ChangeBit-CopyPart-ChangeByte-CMP- DE: "H\000\000\000\000\000\000\000"- 00:08:31.179 [2024-11-05 10:35:57.110988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff730a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.179 [2024-11-05 10:35:57.111014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.179 #42 NEW cov: 12444 ft: 14810 corp: 9/141b lim: 40 exec/s: 0 rss: 73Mb L: 8/33 MS: 1 ShuffleBytes- 00:08:31.179 [2024-11-05 10:35:57.171538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff7f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.179 [2024-11-05 10:35:57.171564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.179 [2024-11-05 10:35:57.171621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.179 [2024-11-05 10:35:57.171636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.179 [2024-11-05 10:35:57.171691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.179 [2024-11-05 10:35:57.171705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:31.179 #43 NEW cov: 12444 ft: 14835 corp: 10/168b lim: 40 exec/s: 0 rss: 73Mb L: 27/33 MS: 1 ChangeBit- 00:08:31.179 [2024-11-05 10:35:57.231575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d1d1ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.179 
[2024-11-05 10:35:57.231600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.179 [2024-11-05 10:35:57.231659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.179 [2024-11-05 10:35:57.231673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.438 #44 NEW cov: 12444 ft: 14906 corp: 11/184b lim: 40 exec/s: 0 rss: 74Mb L: 16/33 MS: 1 ShuffleBytes- 00:08:31.438 [2024-11-05 10:35:57.291728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d1d1ffff cdw11:feffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.438 [2024-11-05 10:35:57.291758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.438 [2024-11-05 10:35:57.291819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.438 [2024-11-05 10:35:57.291833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.438 #45 NEW cov: 12444 ft: 14925 corp: 12/200b lim: 40 exec/s: 0 rss: 74Mb L: 16/33 MS: 1 ChangeBit- 00:08:31.438 [2024-11-05 10:35:57.351729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d1d1ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.438 [2024-11-05 10:35:57.351756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.438 #46 NEW cov: 12444 ft: 14944 corp: 13/208b lim: 40 exec/s: 0 rss: 74Mb L: 8/33 MS: 1 EraseBytes- 00:08:31.438 [2024-11-05 10:35:57.391805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.438 [2024-11-05 10:35:57.391830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.438 NEW_FUNC[1/1]: 0x1c30d58 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:31.438 #47 NEW cov: 12467 ft: 14994 corp: 14/222b lim: 40 exec/s: 0 rss: 74Mb L: 14/33 MS: 1 CrossOver- 00:08:31.438 [2024-11-05 10:35:57.432295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:86ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.438 [2024-11-05 10:35:57.432321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.438 [2024-11-05 10:35:57.432376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.438 [2024-11-05 10:35:57.432390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.438 [2024-11-05 10:35:57.432444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.438 [2024-11-05 
10:35:57.432457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:31.438 #48 NEW cov: 12467 ft: 15006 corp: 15/251b lim: 40 exec/s: 0 rss: 74Mb L: 29/33 MS: 1 InsertByte- 00:08:31.438 [2024-11-05 10:35:57.472047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a8fffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.438 [2024-11-05 10:35:57.472072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.438 #53 NEW cov: 12467 ft: 15037 corp: 16/261b lim: 40 exec/s: 53 rss: 74Mb L: 10/33 MS: 5 CrossOver-CopyPart-EraseBytes-InsertByte-CMP- DE: "\377\377\377\377\377\377\377\377"- 00:08:31.438 [2024-11-05 10:35:57.512184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d1d1ffff cdw11:ffffd1d1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.438 [2024-11-05 10:35:57.512210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.696 #55 NEW cov: 12467 ft: 15080 corp: 17/275b lim: 40 exec/s: 55 rss: 74Mb L: 14/33 MS: 2 EraseBytes-CopyPart- 00:08:31.696 [2024-11-05 10:35:57.552315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff730a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.696 [2024-11-05 10:35:57.552342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.696 #57 NEW cov: 12467 ft: 15092 corp: 18/284b lim: 40 exec/s: 57 rss: 74Mb L: 9/33 MS: 2 ChangeByte-CrossOver- 00:08:31.696 [2024-11-05 10:35:57.592731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.696 [2024-11-05 10:35:57.592758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.696 [2024-11-05 10:35:57.592833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:73ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.696 [2024-11-05 10:35:57.592856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.696 [2024-11-05 10:35:57.592912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.696 [2024-11-05 10:35:57.592927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:31.696 #58 NEW cov: 12467 ft: 15129 corp: 19/314b lim: 40 exec/s: 58 rss: 74Mb L: 30/33 MS: 1 PersAutoDict- DE: "\377s"- 00:08:31.696 [2024-11-05 10:35:57.632509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.696 [2024-11-05 10:35:57.632534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.696 #59 NEW cov: 12467 ft: 15200 corp: 20/328b lim: 40 exec/s: 59 rss: 74Mb L: 14/33 MS: 1 PersAutoDict- DE: "\377s"- 00:08:31.696 [2024-11-05 
10:35:57.692848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d1d1ffff cdw11:ffff5bff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.696 [2024-11-05 10:35:57.692874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.696 [2024-11-05 10:35:57.692932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.696 [2024-11-05 10:35:57.692947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.696 #60 NEW cov: 12467 ft: 15249 corp: 21/345b lim: 40 exec/s: 60 rss: 74Mb L: 17/33 MS: 1 InsertByte- 00:08:31.696 [2024-11-05 10:35:57.732796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d1d1ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.696 [2024-11-05 10:35:57.732821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.955 #61 NEW cov: 12467 ft: 15261 corp: 22/356b lim: 40 exec/s: 61 rss: 74Mb L: 11/33 MS: 1 EraseBytes- 00:08:31.955 [2024-11-05 10:35:57.793359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.955 [2024-11-05 10:35:57.793384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.955 [2024-11-05 10:35:57.793442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.955 [2024-11-05 10:35:57.793456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.955 [2024-11-05 10:35:57.793512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.955 [2024-11-05 10:35:57.793525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:31.955 #62 NEW cov: 12467 ft: 15283 corp: 23/382b lim: 40 exec/s: 62 rss: 74Mb L: 26/33 MS: 1 EraseBytes- 00:08:31.955 [2024-11-05 10:35:57.833087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d1d1ffff cdw11:ff03d1d1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.955 [2024-11-05 10:35:57.833116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.955 #63 NEW cov: 12467 ft: 15322 corp: 24/396b lim: 40 exec/s: 63 rss: 74Mb L: 14/33 MS: 1 ChangeBinInt- 00:08:31.955 [2024-11-05 10:35:57.893275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d1d1ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.955 [2024-11-05 10:35:57.893301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.955 #64 NEW cov: 12467 ft: 15335 corp: 25/410b lim: 40 exec/s: 64 rss: 74Mb L: 14/33 MS: 1 ShuffleBytes- 00:08:31.955 [2024-11-05 10:35:57.933547] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d1d1ffff cdw11:ffffd1d1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.955 [2024-11-05 10:35:57.933572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.955 [2024-11-05 10:35:57.933630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.955 [2024-11-05 10:35:57.933644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.955 #65 NEW cov: 12467 ft: 15343 corp: 26/427b lim: 40 exec/s: 65 rss: 74Mb L: 17/33 MS: 1 CopyPart- 00:08:31.955 [2024-11-05 10:35:57.973514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.955 [2024-11-05 10:35:57.973539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.955 #68 NEW cov: 12467 ft: 15355 corp: 27/436b lim: 40 exec/s: 68 rss: 74Mb L: 9/33 MS: 3 CopyPart-CopyPart-CMP- DE: "\000\000\000\000\000\000\000\004"- 00:08:31.955 [2024-11-05 10:35:58.014016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff7f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.955 [2024-11-05 10:35:58.014042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.955 [2024-11-05 10:35:58.014098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.955 [2024-11-05 10:35:58.014112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.955 [2024-11-05 10:35:58.014168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:fffffeff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.955 [2024-11-05 10:35:58.014181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:32.218 #69 NEW cov: 12467 ft: 15424 corp: 28/463b lim: 40 exec/s: 69 rss: 74Mb L: 27/33 MS: 1 ChangeBit- 00:08:32.218 [2024-11-05 10:35:58.073990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffd1d1ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.218 [2024-11-05 10:35:58.074016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.218 [2024-11-05 10:35:58.074072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.218 [2024-11-05 10:35:58.074086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.219 #70 NEW cov: 12467 ft: 15448 corp: 29/485b lim: 40 exec/s: 70 rss: 74Mb L: 22/33 MS: 1 CrossOver- 00:08:32.219 [2024-11-05 10:35:58.134534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 
nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.219 [2024-11-05 10:35:58.134564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.219 [2024-11-05 10:35:58.134618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.219 [2024-11-05 10:35:58.134632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.219 [2024-11-05 10:35:58.134687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.219 [2024-11-05 10:35:58.134700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:32.219 [2024-11-05 10:35:58.134746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.219 [2024-11-05 10:35:58.134759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:32.219 #71 NEW cov: 12467 ft: 15457 corp: 30/521b lim: 40 exec/s: 71 rss: 74Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:08:32.219 [2024-11-05 10:35:58.174460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.219 [2024-11-05 10:35:58.174486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.219 [2024-11-05 10:35:58.174545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:73ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.219 [2024-11-05 10:35:58.174559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.219 [2024-11-05 10:35:58.174614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff2cff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.219 [2024-11-05 10:35:58.174628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:32.219 #72 NEW cov: 12467 ft: 15485 corp: 31/552b lim: 40 exec/s: 72 rss: 74Mb L: 31/36 MS: 1 InsertByte- 00:08:32.219 [2024-11-05 10:35:58.234289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:fffd730a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.219 [2024-11-05 10:35:58.234314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.219 #73 NEW cov: 12467 ft: 15497 corp: 32/561b lim: 40 exec/s: 73 rss: 74Mb L: 9/36 MS: 1 ChangeBit- 00:08:32.219 [2024-11-05 10:35:58.294482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.219 [2024-11-05 10:35:58.294507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:08:32.481 #74 NEW cov: 12467 ft: 15543 corp: 33/570b lim: 40 exec/s: 74 rss: 74Mb L: 9/36 MS: 1 ChangeByte- 00:08:32.481 [2024-11-05 10:35:58.355164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff21ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.481 [2024-11-05 10:35:58.355189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.481 [2024-11-05 10:35:58.355261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.481 [2024-11-05 10:35:58.355276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.481 [2024-11-05 10:35:58.355339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.481 [2024-11-05 10:35:58.355353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:32.481 [2024-11-05 10:35:58.355410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.481 [2024-11-05 10:35:58.355424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:32.481 #75 NEW cov: 12467 ft: 15558 corp: 34/606b lim: 40 exec/s: 75 rss: 75Mb L: 36/36 MS: 1 ChangeByte- 00:08:32.481 [2024-11-05 10:35:58.414809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff73c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.481 [2024-11-05 10:35:58.414836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.481 #76 NEW cov: 12467 ft: 15577 corp: 35/616b lim: 40 exec/s: 76 rss: 75Mb L: 10/36 MS: 1 InsertByte- 00:08:32.481 [2024-11-05 10:35:58.455441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff7f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.481 [2024-11-05 10:35:58.455467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.481 [2024-11-05 10:35:58.455526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.481 [2024-11-05 10:35:58.455541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.481 [2024-11-05 10:35:58.455596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffd2d2d2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.481 [2024-11-05 10:35:58.455610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:32.481 [2024-11-05 10:35:58.455665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.481 [2024-11-05 10:35:58.455679] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:32.481 #77 NEW cov: 12467 ft: 15633 corp: 36/653b lim: 40 exec/s: 38 rss: 75Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:08:32.481 #77 DONE cov: 12467 ft: 15633 corp: 36/653b lim: 40 exec/s: 38 rss: 75Mb 00:08:32.481 ###### Recommended dictionary. ###### 00:08:32.481 "\377s" # Uses: 6 00:08:32.481 "H\000\000\000\000\000\000\000" # Uses: 0 00:08:32.481 "\377\377\377\377\377\377\377\377" # Uses: 0 00:08:32.481 "\000\000\000\000\000\000\000\004" # Uses: 0 00:08:32.481 ###### End of recommended dictionary. ###### 00:08:32.481 Done 77 runs in 2 second(s) 00:08:32.740 10:35:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:08:32.740 10:35:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:32.740 10:35:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:32.740 10:35:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:08:32.740 10:35:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:08:32.740 10:35:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:32.740 10:35:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:32.740 10:35:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:08:32.740 10:35:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:08:32.740 10:35:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:32.740 10:35:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:32.740 10:35:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:08:32.740 10:35:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:08:32.740 10:35:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:08:32.740 10:35:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:08:32.740 10:35:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:32.740 10:35:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:32.740 10:35:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:32.740 10:35:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:08:32.740 [2024-11-05 10:35:58.663735] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
00:08:32.740 [2024-11-05 10:35:58.663825] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2864578 ] 00:08:32.998 [2024-11-05 10:35:58.937704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.998 [2024-11-05 10:35:58.990967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.998 [2024-11-05 10:35:59.054833] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.998 [2024-11-05 10:35:59.071074] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:08:33.257 INFO: Running with entropic power schedule (0xFF, 100). 00:08:33.257 INFO: Seed: 3488054242 00:08:33.257 INFO: Loaded 1 modules (387441 inline 8-bit counters): 387441 [0x2c3ac4c, 0x2c995bd), 00:08:33.257 INFO: Loaded 1 PC tables (387441 PCs): 387441 [0x2c995c0,0x3282cd0), 00:08:33.257 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:08:33.257 INFO: A corpus is not provided, starting from an empty corpus 00:08:33.257 #2 INITED exec/s: 0 rss: 66Mb 00:08:33.257 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:33.257 This may also happen if the target rejected all inputs we tried so far 00:08:33.257 [2024-11-05 10:35:59.120873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.257 [2024-11-05 10:35:59.120902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.257 [2024-11-05 10:35:59.120977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.257 [2024-11-05 10:35:59.120993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.257 [2024-11-05 10:35:59.121048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.257 [2024-11-05 10:35:59.121061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.515 NEW_FUNC[1/716]: 0x44c568 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:08:33.515 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:33.515 #10 NEW cov: 12238 ft: 12229 corp: 2/32b lim: 40 exec/s: 0 rss: 73Mb L: 31/31 MS: 3 ChangeBit-CrossOver-InsertRepeatedBytes- 00:08:33.515 [2024-11-05 10:35:59.442224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0b000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.515 [2024-11-05 10:35:59.442261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.515 [2024-11-05 10:35:59.442325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 
cid:5 nsid:0 cdw10:00000b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.515 [2024-11-05 10:35:59.442341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.515 [2024-11-05 10:35:59.442399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.515 [2024-11-05 10:35:59.442413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.515 [2024-11-05 10:35:59.442471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.515 [2024-11-05 10:35:59.442484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:33.516 [2024-11-05 10:35:59.442543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.516 [2024-11-05 10:35:59.442556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:33.516 #26 NEW cov: 12351 ft: 12986 corp: 3/72b lim: 40 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:08:33.516 [2024-11-05 10:35:59.502121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.516 [2024-11-05 10:35:59.502149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.516 [2024-11-05 10:35:59.502210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.516 [2024-11-05 10:35:59.502224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.516 [2024-11-05 10:35:59.502284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.516 [2024-11-05 10:35:59.502298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.516 [2024-11-05 10:35:59.502357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.516 [2024-11-05 10:35:59.502371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:33.516 #27 NEW cov: 12357 ft: 13338 corp: 4/107b lim: 40 exec/s: 0 rss: 73Mb L: 35/40 MS: 1 InsertRepeatedBytes- 00:08:33.516 [2024-11-05 10:35:59.542130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.516 [2024-11-05 10:35:59.542157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.516 [2024-11-05 10:35:59.542234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.516 [2024-11-05 10:35:59.542249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.516 [2024-11-05 10:35:59.542310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.516 [2024-11-05 10:35:59.542325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.516 [2024-11-05 10:35:59.542386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.516 [2024-11-05 10:35:59.542400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:33.516 #33 NEW cov: 12442 ft: 13668 corp: 5/142b lim: 40 exec/s: 0 rss: 73Mb L: 35/40 MS: 1 ChangeBinInt- 00:08:33.786 [2024-11-05 10:35:59.602372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.786 [2024-11-05 10:35:59.602398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.786 [2024-11-05 10:35:59.602459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.786 [2024-11-05 10:35:59.602474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.786 [2024-11-05 10:35:59.602532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.787 [2024-11-05 10:35:59.602546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.787 [2024-11-05 10:35:59.602606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.787 [2024-11-05 10:35:59.602621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:33.787 #34 NEW cov: 12442 ft: 13783 corp: 6/177b lim: 40 exec/s: 0 rss: 73Mb L: 35/40 MS: 1 CopyPart- 00:08:33.787 [2024-11-05 10:35:59.662336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:8b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.787 [2024-11-05 10:35:59.662362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.787 [2024-11-05 10:35:59.662439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.787 [2024-11-05 10:35:59.662455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.787 [2024-11-05 10:35:59.662516] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.787 [2024-11-05 10:35:59.662531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.787 #35 NEW cov: 12442 ft: 13890 corp: 7/208b lim: 40 exec/s: 0 rss: 73Mb L: 31/40 MS: 1 ChangeBit- 00:08:33.787 [2024-11-05 10:35:59.702407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.787 [2024-11-05 10:35:59.702433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.787 [2024-11-05 10:35:59.702512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000a0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.787 [2024-11-05 10:35:59.702527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.787 [2024-11-05 10:35:59.702593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.787 [2024-11-05 10:35:59.702607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.787 #36 NEW cov: 12442 ft: 13934 corp: 8/235b lim: 40 exec/s: 0 rss: 73Mb L: 27/40 MS: 1 CrossOver- 00:08:33.787 [2024-11-05 10:35:59.742752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a00010b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.787 [2024-11-05 10:35:59.742779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.787 [2024-11-05 10:35:59.742839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.787 [2024-11-05 10:35:59.742854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.787 [2024-11-05 10:35:59.742911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.787 [2024-11-05 10:35:59.742925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.787 [2024-11-05 10:35:59.742980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:0b0b0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.787 [2024-11-05 10:35:59.742994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:33.787 #37 NEW cov: 12442 ft: 13990 corp: 9/270b lim: 40 exec/s: 0 rss: 73Mb L: 35/40 MS: 1 CrossOver- 00:08:33.787 [2024-11-05 10:35:59.782920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.787 [2024-11-05 10:35:59.782947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.787 [2024-11-05 10:35:59.783025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.787 [2024-11-05 10:35:59.783040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.787 [2024-11-05 10:35:59.783100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.787 [2024-11-05 10:35:59.783114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.787 [2024-11-05 10:35:59.783174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00290000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.787 [2024-11-05 10:35:59.783188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:33.787 #38 NEW cov: 12442 ft: 14051 corp: 10/306b lim: 40 exec/s: 0 rss: 73Mb L: 36/40 MS: 1 InsertByte- 00:08:33.787 [2024-11-05 10:35:59.823216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0b000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.787 [2024-11-05 10:35:59.823243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.787 [2024-11-05 10:35:59.823302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.787 [2024-11-05 10:35:59.823316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.787 [2024-11-05 10:35:59.823376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.787 [2024-11-05 10:35:59.823391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.787 [2024-11-05 10:35:59.823450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.787 [2024-11-05 10:35:59.823464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:33.787 [2024-11-05 10:35:59.823523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:0b0b0b0b cdw11:0b00000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.787 [2024-11-05 10:35:59.823538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:34.049 #39 NEW cov: 12442 ft: 14116 corp: 11/346b lim: 40 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 CrossOver- 00:08:34.049 [2024-11-05 10:35:59.882995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.049 [2024-11-05 10:35:59.883021] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.049 [2024-11-05 10:35:59.883084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.049 [2024-11-05 10:35:59.883099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.049 [2024-11-05 10:35:59.883157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0a0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.049 [2024-11-05 10:35:59.883171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.049 #40 NEW cov: 12442 ft: 14127 corp: 12/377b lim: 40 exec/s: 0 rss: 73Mb L: 31/40 MS: 1 ChangeBit- 00:08:34.049 [2024-11-05 10:35:59.922726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:47000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.049 [2024-11-05 10:35:59.922752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.049 #42 NEW cov: 12442 ft: 14886 corp: 13/385b lim: 40 exec/s: 0 rss: 73Mb L: 8/40 MS: 2 InsertByte-CrossOver- 00:08:34.049 [2024-11-05 10:35:59.963599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0b000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.049 [2024-11-05 10:35:59.963625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.049 [2024-11-05 10:35:59.963701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.049 [2024-11-05 10:35:59.963721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.049 [2024-11-05 10:35:59.963779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0b0b0b0b cdw11:4b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.049 [2024-11-05 10:35:59.963794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.049 [2024-11-05 10:35:59.963852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.049 [2024-11-05 10:35:59.963866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:34.049 [2024-11-05 10:35:59.963928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:0b0b0b0b cdw11:0b00000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.049 [2024-11-05 10:35:59.963943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:34.049 NEW_FUNC[1/1]: 0x1c30d58 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:34.049 #43 NEW cov: 12465 ft: 14931 corp: 14/425b lim: 40 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 ChangeBit- 
00:08:34.049 [2024-11-05 10:36:00.023624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0b000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.049 [2024-11-05 10:36:00.023656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.049 [2024-11-05 10:36:00.023729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.049 [2024-11-05 10:36:00.023745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.049 [2024-11-05 10:36:00.023803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.049 [2024-11-05 10:36:00.023818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.049 [2024-11-05 10:36:00.023880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.049 [2024-11-05 10:36:00.023893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:34.049 #44 NEW cov: 12465 ft: 14979 corp: 15/464b lim: 40 exec/s: 0 rss: 73Mb L: 39/40 MS: 1 EraseBytes- 00:08:34.049 [2024-11-05 10:36:00.063783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.049 [2024-11-05 10:36:00.063815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.049 [2024-11-05 10:36:00.063874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.049 [2024-11-05 10:36:00.063890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.049 [2024-11-05 10:36:00.063951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.049 [2024-11-05 10:36:00.063966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.049 [2024-11-05 10:36:00.064024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.049 [2024-11-05 10:36:00.064038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:34.049 #45 NEW cov: 12465 ft: 14984 corp: 16/499b lim: 40 exec/s: 45 rss: 74Mb L: 35/40 MS: 1 CrossOver- 00:08:34.049 [2024-11-05 10:36:00.123954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aa7a7a7 cdw11:a7a7a7a7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.049 [2024-11-05 10:36:00.123984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:08:34.049 [2024-11-05 10:36:00.124046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a7a7a7a7 cdw11:a7a7a7a7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.049 [2024-11-05 10:36:00.124068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.049 [2024-11-05 10:36:00.124126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a7a7a7a7 cdw11:a7a7a7a7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.049 [2024-11-05 10:36:00.124140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.049 [2024-11-05 10:36:00.124202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:a7a7a7a7 cdw11:a7a7a7a7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.049 [2024-11-05 10:36:00.124217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:34.307 #46 NEW cov: 12465 ft: 14996 corp: 17/534b lim: 40 exec/s: 46 rss: 74Mb L: 35/40 MS: 1 InsertRepeatedBytes- 00:08:34.307 [2024-11-05 10:36:00.164039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.308 [2024-11-05 10:36:00.164068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.308 [2024-11-05 10:36:00.164130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.308 [2024-11-05 10:36:00.164146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.308 [2024-11-05 10:36:00.164203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0b0b2828 cdw11:2828280b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.308 [2024-11-05 10:36:00.164217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.308 [2024-11-05 10:36:00.164278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:0b0b0b0a cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.308 [2024-11-05 10:36:00.164293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:34.308 #47 NEW cov: 12465 ft: 15045 corp: 18/570b lim: 40 exec/s: 47 rss: 74Mb L: 36/40 MS: 1 InsertRepeatedBytes- 00:08:34.308 [2024-11-05 10:36:00.224065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.308 [2024-11-05 10:36:00.224093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.308 [2024-11-05 10:36:00.224155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.308 [2024-11-05 10:36:00.224170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.308 [2024-11-05 10:36:00.224228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.308 [2024-11-05 10:36:00.224242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.308 #48 NEW cov: 12465 ft: 15092 corp: 19/601b lim: 40 exec/s: 48 rss: 74Mb L: 31/40 MS: 1 ChangeByte- 00:08:34.308 [2024-11-05 10:36:00.264568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0b000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.308 [2024-11-05 10:36:00.264594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.308 [2024-11-05 10:36:00.264674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.308 [2024-11-05 10:36:00.264694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.308 [2024-11-05 10:36:00.264754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0b0b0b0b cdw11:4b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.308 [2024-11-05 10:36:00.264768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.308 [2024-11-05 10:36:00.264843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.308 [2024-11-05 10:36:00.264858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:34.308 [2024-11-05 10:36:00.264918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:0b0b0b0b cdw11:0b00000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.308 [2024-11-05 10:36:00.264933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:34.308 #49 NEW cov: 12465 ft: 15105 corp: 20/641b lim: 40 exec/s: 49 rss: 74Mb L: 40/40 MS: 1 ShuffleBytes- 00:08:34.308 [2024-11-05 10:36:00.324531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0b000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.308 [2024-11-05 10:36:00.324557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.308 [2024-11-05 10:36:00.324618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.308 [2024-11-05 10:36:00.324633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.308 [2024-11-05 10:36:00.324692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.308 [2024-11-05 10:36:00.324706] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.308 [2024-11-05 10:36:00.324775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:0bff0100 cdw11:000b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.308 [2024-11-05 10:36:00.324790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:34.308 #50 NEW cov: 12465 ft: 15120 corp: 21/680b lim: 40 exec/s: 50 rss: 74Mb L: 39/40 MS: 1 CMP- DE: "\377\001\000\000"- 00:08:34.308 [2024-11-05 10:36:00.384681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.308 [2024-11-05 10:36:00.384707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.308 [2024-11-05 10:36:00.384772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.308 [2024-11-05 10:36:00.384786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.308 [2024-11-05 10:36:00.384843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.308 [2024-11-05 10:36:00.384858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.308 [2024-11-05 10:36:00.384907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.308 [2024-11-05 10:36:00.384923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:34.567 #51 NEW cov: 12465 ft: 15137 corp: 22/715b lim: 40 exec/s: 51 rss: 74Mb L: 35/40 MS: 1 ShuffleBytes- 00:08:34.567 [2024-11-05 10:36:00.424850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.567 [2024-11-05 10:36:00.424876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.567 [2024-11-05 10:36:00.424939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b030b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.567 [2024-11-05 10:36:00.424955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.567 [2024-11-05 10:36:00.425015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0b0b2828 cdw11:2828280b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.567 [2024-11-05 10:36:00.425030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.567 [2024-11-05 10:36:00.425088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:0b0b0b0a cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.567 
[2024-11-05 10:36:00.425102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:34.567 #52 NEW cov: 12465 ft: 15166 corp: 23/751b lim: 40 exec/s: 52 rss: 74Mb L: 36/40 MS: 1 ChangeBinInt- 00:08:34.567 [2024-11-05 10:36:00.485024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.567 [2024-11-05 10:36:00.485051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.567 [2024-11-05 10:36:00.485115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.567 [2024-11-05 10:36:00.485130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.567 [2024-11-05 10:36:00.485191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:01000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.567 [2024-11-05 10:36:00.485206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.567 [2024-11-05 10:36:00.485262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.567 [2024-11-05 10:36:00.485277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:34.567 #53 NEW cov: 12465 ft: 15175 corp: 24/786b lim: 40 exec/s: 53 rss: 74Mb L: 35/40 MS: 1 PersAutoDict- DE: "\377\001\000\000"- 00:08:34.567 [2024-11-05 10:36:00.545027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:8b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.567 [2024-11-05 10:36:00.545052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.567 [2024-11-05 10:36:00.545112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.567 [2024-11-05 10:36:00.545128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.567 [2024-11-05 10:36:00.545189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.568 [2024-11-05 10:36:00.545206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.568 #54 NEW cov: 12465 ft: 15203 corp: 25/813b lim: 40 exec/s: 54 rss: 74Mb L: 27/40 MS: 1 EraseBytes- 00:08:34.568 [2024-11-05 10:36:00.605320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0b000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.568 [2024-11-05 10:36:00.605345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.568 [2024-11-05 10:36:00.605407] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.568 [2024-11-05 10:36:00.605422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.568 [2024-11-05 10:36:00.605480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0b0b3d0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.568 [2024-11-05 10:36:00.605494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.568 [2024-11-05 10:36:00.605554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.568 [2024-11-05 10:36:00.605568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:34.568 #55 NEW cov: 12465 ft: 15287 corp: 26/852b lim: 40 exec/s: 55 rss: 74Mb L: 39/40 MS: 1 ChangeByte- 00:08:34.568 [2024-11-05 10:36:00.645664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0b000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.568 [2024-11-05 10:36:00.645691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.568 [2024-11-05 10:36:00.645748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.568 [2024-11-05 10:36:00.645763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.568 [2024-11-05 10:36:00.645823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:090b0b0b cdw11:4b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.568 [2024-11-05 10:36:00.645837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.568 [2024-11-05 10:36:00.645899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.568 [2024-11-05 10:36:00.645912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:34.826 [2024-11-05 10:36:00.645969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:0b0b0b0b cdw11:0b00000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.826 [2024-11-05 10:36:00.645987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:34.826 #56 NEW cov: 12465 ft: 15297 corp: 27/892b lim: 40 exec/s: 56 rss: 74Mb L: 40/40 MS: 1 ChangeBit- 00:08:34.826 [2024-11-05 10:36:00.685152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.826 [2024-11-05 10:36:00.685177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.826 [2024-11-05 
10:36:00.685238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00002900 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.826 [2024-11-05 10:36:00.685255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.826 #57 NEW cov: 12465 ft: 15503 corp: 28/913b lim: 40 exec/s: 57 rss: 74Mb L: 21/40 MS: 1 EraseBytes- 00:08:34.826 [2024-11-05 10:36:00.745313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:8b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.826 [2024-11-05 10:36:00.745339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.826 [2024-11-05 10:36:00.745403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.826 [2024-11-05 10:36:00.745418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.827 #58 NEW cov: 12465 ft: 15520 corp: 29/934b lim: 40 exec/s: 58 rss: 74Mb L: 21/40 MS: 1 EraseBytes- 00:08:34.827 [2024-11-05 10:36:00.805672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:8b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.827 [2024-11-05 10:36:00.805697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.827 [2024-11-05 10:36:00.805782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b0b270b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.827 [2024-11-05 10:36:00.805798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.827 [2024-11-05 10:36:00.805855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.827 [2024-11-05 10:36:00.805869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.827 #59 NEW cov: 12465 ft: 15527 corp: 30/961b lim: 40 exec/s: 59 rss: 74Mb L: 27/40 MS: 1 ChangeByte- 00:08:34.827 [2024-11-05 10:36:00.846076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000100 cdw11:0000b51f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.827 [2024-11-05 10:36:00.846102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.827 [2024-11-05 10:36:00.846167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:13f87e7f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.827 [2024-11-05 10:36:00.846182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.827 [2024-11-05 10:36:00.846241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.827 [2024-11-05 10:36:00.846256] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.827 [2024-11-05 10:36:00.846315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.827 [2024-11-05 10:36:00.846328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:34.827 #60 NEW cov: 12465 ft: 15528 corp: 31/996b lim: 40 exec/s: 60 rss: 74Mb L: 35/40 MS: 1 CMP- DE: "\265\037\023\370~\177\000\000"- 00:08:34.827 [2024-11-05 10:36:00.886119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:8bff0100 cdw11:000b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.827 [2024-11-05 10:36:00.886145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.827 [2024-11-05 10:36:00.886211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.827 [2024-11-05 10:36:00.886226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.827 [2024-11-05 10:36:00.886284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.827 [2024-11-05 10:36:00.886299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.827 [2024-11-05 10:36:00.886356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.827 [2024-11-05 10:36:00.886370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:35.086 #61 NEW cov: 12465 ft: 15577 corp: 32/1031b lim: 40 exec/s: 61 rss: 74Mb L: 35/40 MS: 1 PersAutoDict- DE: "\377\001\000\000"- 00:08:35.086 [2024-11-05 10:36:00.926315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.086 [2024-11-05 10:36:00.926341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.086 [2024-11-05 10:36:00.926402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0b0b0b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.086 [2024-11-05 10:36:00.926417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:35.086 [2024-11-05 10:36:00.926475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0b0b2828 cdw11:2828280b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.086 [2024-11-05 10:36:00.926489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:35.086 [2024-11-05 10:36:00.926545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:0b0b0b0a cdw11:0b0b0b04 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:08:35.086 [2024-11-05 10:36:00.926559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:35.086 #62 NEW cov: 12465 ft: 15607 corp: 33/1068b lim: 40 exec/s: 62 rss: 74Mb L: 37/40 MS: 1 InsertByte- 00:08:35.086 [2024-11-05 10:36:00.966366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0b000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.086 [2024-11-05 10:36:00.966391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.086 [2024-11-05 10:36:00.966451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000b0b cdw11:0b0b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.086 [2024-11-05 10:36:00.966465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:35.086 [2024-11-05 10:36:00.966523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0b0bf5f4 cdw11:f4f4f4f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.086 [2024-11-05 10:36:00.966537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:35.086 [2024-11-05 10:36:00.966595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:f3f70100 cdw11:000b0b0b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.086 [2024-11-05 10:36:00.966609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:35.086 #63 NEW cov: 12465 ft: 15633 corp: 34/1107b lim: 40 exec/s: 63 rss: 74Mb L: 39/40 MS: 1 ChangeBinInt- 00:08:35.086 [2024-11-05 10:36:01.026521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.086 [2024-11-05 10:36:01.026546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.086 [2024-11-05 10:36:01.026607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.086 [2024-11-05 10:36:01.026622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:35.086 [2024-11-05 10:36:01.026681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:01000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.086 [2024-11-05 10:36:01.026695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:35.086 [2024-11-05 10:36:01.026745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.086 [2024-11-05 10:36:01.026761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:35.086 #64 NEW cov: 12465 ft: 15647 corp: 35/1142b lim: 40 exec/s: 64 rss: 74Mb L: 35/40 MS: 1 ChangeBit- 00:08:35.086 [2024-11-05 10:36:01.086717] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.086 [2024-11-05 10:36:01.086742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.086 [2024-11-05 10:36:01.086800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.086 [2024-11-05 10:36:01.086815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:35.086 [2024-11-05 10:36:01.086874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.086 [2024-11-05 10:36:01.086887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:35.086 [2024-11-05 10:36:01.086943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00290000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.086 [2024-11-05 10:36:01.086957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:35.086 #65 NEW cov: 12465 ft: 15659 corp: 36/1178b lim: 40 exec/s: 32 rss: 74Mb L: 36/40 MS: 1 ShuffleBytes- 00:08:35.086 #65 DONE cov: 12465 ft: 15659 corp: 36/1178b lim: 40 exec/s: 32 rss: 74Mb 00:08:35.086 ###### Recommended dictionary. ###### 00:08:35.086 "\377\001\000\000" # Uses: 2 00:08:35.086 "\265\037\023\370~\177\000\000" # Uses: 0 00:08:35.086 ###### End of recommended dictionary. 
###### 00:08:35.086 Done 65 runs in 2 second(s) 00:08:35.343 10:36:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:08:35.343 10:36:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:35.343 10:36:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:35.343 10:36:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:08:35.343 10:36:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:08:35.343 10:36:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:35.343 10:36:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:35.343 10:36:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:35.343 10:36:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:08:35.343 10:36:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:35.343 10:36:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:35.343 10:36:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:08:35.343 10:36:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:08:35.343 10:36:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:35.343 10:36:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:08:35.343 10:36:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:35.343 10:36:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:35.343 10:36:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:35.343 10:36:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:08:35.343 [2024-11-05 10:36:01.267095] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
00:08:35.343 [2024-11-05 10:36:01.267163] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2864965 ] 00:08:35.601 [2024-11-05 10:36:01.538941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.601 [2024-11-05 10:36:01.586775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.601 [2024-11-05 10:36:01.650687] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.601 [2024-11-05 10:36:01.666933] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:08:35.859 INFO: Running with entropic power schedule (0xFF, 100). 00:08:35.859 INFO: Seed: 1789098259 00:08:35.859 INFO: Loaded 1 modules (387441 inline 8-bit counters): 387441 [0x2c3ac4c, 0x2c995bd), 00:08:35.859 INFO: Loaded 1 PC tables (387441 PCs): 387441 [0x2c995c0,0x3282cd0), 00:08:35.859 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:35.859 INFO: A corpus is not provided, starting from an empty corpus 00:08:35.859 #2 INITED exec/s: 0 rss: 66Mb 00:08:35.859 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:35.859 This may also happen if the target rejected all inputs we tried so far 00:08:35.859 [2024-11-05 10:36:01.716665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.859 [2024-11-05 10:36:01.716695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.859 [2024-11-05 10:36:01.716773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff0b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.859 [2024-11-05 10:36:01.716789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.117 NEW_FUNC[1/715]: 0x44e138 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:08:36.117 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:36.117 #24 NEW cov: 12226 ft: 12222 corp: 2/17b lim: 40 exec/s: 0 rss: 73Mb L: 16/16 MS: 2 ChangeBit-InsertRepeatedBytes- 00:08:36.117 [2024-11-05 10:36:02.037689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.117 [2024-11-05 10:36:02.037730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.117 [2024-11-05 10:36:02.037791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.117 [2024-11-05 10:36:02.037806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.117 [2024-11-05 10:36:02.037862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE 
RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.117 [2024-11-05 10:36:02.037877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:36.117 #25 NEW cov: 12339 ft: 12912 corp: 3/46b lim: 40 exec/s: 0 rss: 73Mb L: 29/29 MS: 1 CopyPart- 00:08:36.117 [2024-11-05 10:36:02.097637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.117 [2024-11-05 10:36:02.097664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.118 [2024-11-05 10:36:02.097723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ff10ffff cdw11:ffffff0b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.118 [2024-11-05 10:36:02.097738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.118 #26 NEW cov: 12345 ft: 13078 corp: 4/62b lim: 40 exec/s: 0 rss: 73Mb L: 16/29 MS: 1 ChangeBinInt- 00:08:36.118 [2024-11-05 10:36:02.137709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.118 [2024-11-05 10:36:02.137756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.118 [2024-11-05 10:36:02.137815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff0b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.118 [2024-11-05 10:36:02.137830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.118 #27 NEW cov: 12430 ft: 13559 corp: 5/78b lim: 40 exec/s: 0 rss: 73Mb L: 16/29 MS: 1 ShuffleBytes- 00:08:36.118 [2024-11-05 10:36:02.177842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffff2aff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.118 [2024-11-05 10:36:02.177880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.118 [2024-11-05 10:36:02.177955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffff10ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.118 [2024-11-05 10:36:02.177970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.376 #28 NEW cov: 12430 ft: 13628 corp: 6/95b lim: 40 exec/s: 0 rss: 73Mb L: 17/29 MS: 1 InsertByte- 00:08:36.376 [2024-11-05 10:36:02.237883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:003a71c8 cdw11:3b416c6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.376 [2024-11-05 10:36:02.237908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.376 #32 NEW cov: 12430 ft: 14020 corp: 7/104b lim: 40 exec/s: 0 rss: 73Mb L: 9/29 MS: 4 ChangeBinInt-ShuffleBytes-CrossOver-CMP- DE: "\000:q\310;Aln"- 00:08:36.376 [2024-11-05 10:36:02.278256] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.376 [2024-11-05 10:36:02.278285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.376 [2024-11-05 10:36:02.278346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:003a71c8 cdw11:3b416c6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.376 [2024-11-05 10:36:02.278361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.377 [2024-11-05 10:36:02.278418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff0b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.377 [2024-11-05 10:36:02.278433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:36.377 #33 NEW cov: 12430 ft: 14082 corp: 8/128b lim: 40 exec/s: 0 rss: 73Mb L: 24/29 MS: 1 PersAutoDict- DE: "\000:q\310;Aln"- 00:08:36.377 [2024-11-05 10:36:02.338431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:fdffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.377 [2024-11-05 10:36:02.338457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.377 [2024-11-05 10:36:02.338517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.377 [2024-11-05 10:36:02.338532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.377 [2024-11-05 10:36:02.338591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.377 [2024-11-05 10:36:02.338604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:36.377 #36 NEW cov: 12430 ft: 14102 corp: 9/158b lim: 40 exec/s: 0 rss: 73Mb L: 30/30 MS: 3 CrossOver-ChangeBit-CrossOver- 00:08:36.377 [2024-11-05 10:36:02.378431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.377 [2024-11-05 10:36:02.378457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.377 [2024-11-05 10:36:02.378517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff0b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.377 [2024-11-05 10:36:02.378532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.377 #37 NEW cov: 12430 ft: 14126 corp: 10/174b lim: 40 exec/s: 0 rss: 73Mb L: 16/30 MS: 1 ShuffleBytes- 00:08:36.377 [2024-11-05 10:36:02.418396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:36.377 [2024-11-05 10:36:02.418421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.635 #38 NEW cov: 12430 ft: 14240 corp: 11/188b lim: 40 exec/s: 0 rss: 73Mb L: 14/30 MS: 1 EraseBytes- 00:08:36.635 [2024-11-05 10:36:02.478817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.635 [2024-11-05 10:36:02.478844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.635 [2024-11-05 10:36:02.478904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff0b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.635 [2024-11-05 10:36:02.478922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.635 #39 NEW cov: 12430 ft: 14280 corp: 12/204b lim: 40 exec/s: 0 rss: 73Mb L: 16/30 MS: 1 ChangeBinInt- 00:08:36.635 [2024-11-05 10:36:02.518928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.635 [2024-11-05 10:36:02.518954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.635 [2024-11-05 10:36:02.519014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff0b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.635 [2024-11-05 10:36:02.519029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.635 #40 NEW cov: 12430 ft: 14346 corp: 13/220b lim: 40 exec/s: 0 rss: 73Mb L: 16/30 MS: 1 ShuffleBytes- 00:08:36.635 [2024-11-05 10:36:02.559160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.635 [2024-11-05 10:36:02.559186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.635 [2024-11-05 10:36:02.559263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:003a71c8 cdw11:3b416c6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.635 [2024-11-05 10:36:02.559278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.635 [2024-11-05 10:36:02.559337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff0b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.635 [2024-11-05 10:36:02.559352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:36.635 NEW_FUNC[1/1]: 0x1c30d58 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:36.635 #41 NEW cov: 12453 ft: 14388 corp: 14/244b lim: 40 exec/s: 0 rss: 73Mb L: 24/30 MS: 1 CopyPart- 00:08:36.635 [2024-11-05 10:36:02.619345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff 
cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.635 [2024-11-05 10:36:02.619371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.635 [2024-11-05 10:36:02.619449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:003a71c8 cdw11:3b416c6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.635 [2024-11-05 10:36:02.619464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.635 [2024-11-05 10:36:02.619524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff0b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.635 [2024-11-05 10:36:02.619539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:36.635 #42 NEW cov: 12453 ft: 14396 corp: 15/268b lim: 40 exec/s: 0 rss: 73Mb L: 24/30 MS: 1 ShuffleBytes- 00:08:36.635 [2024-11-05 10:36:02.659552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.635 [2024-11-05 10:36:02.659579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.635 [2024-11-05 10:36:02.659657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:003a71c8 cdw11:3b416c6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.635 [2024-11-05 10:36:02.659673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.635 [2024-11-05 10:36:02.659731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.635 [2024-11-05 10:36:02.659745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:36.635 [2024-11-05 10:36:02.659805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffff003a cdw11:71c83b41 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.635 [2024-11-05 10:36:02.659820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:36.635 #43 NEW cov: 12453 ft: 14894 corp: 16/304b lim: 40 exec/s: 0 rss: 74Mb L: 36/36 MS: 1 CopyPart- 00:08:36.635 [2024-11-05 10:36:02.699728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:003a71c8 cdw11:3b416c6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.635 [2024-11-05 10:36:02.699771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.635 [2024-11-05 10:36:02.699832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.635 [2024-11-05 10:36:02.699847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.636 [2024-11-05 10:36:02.699906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:003a71c8 cdw11:3b416c6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.636 [2024-11-05 10:36:02.699920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:36.636 [2024-11-05 10:36:02.699980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff0b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.636 [2024-11-05 10:36:02.699994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:36.894 #44 NEW cov: 12453 ft: 14911 corp: 17/336b lim: 40 exec/s: 44 rss: 74Mb L: 32/36 MS: 1 PersAutoDict- DE: "\000:q\310;Aln"- 00:08:36.894 [2024-11-05 10:36:02.759620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffff28 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.894 [2024-11-05 10:36:02.759646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.894 [2024-11-05 10:36:02.759724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff0b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.894 [2024-11-05 10:36:02.759740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.894 #45 NEW cov: 12453 ft: 14926 corp: 18/352b lim: 40 exec/s: 45 rss: 74Mb L: 16/36 MS: 1 ChangeByte- 00:08:36.894 [2024-11-05 10:36:02.819613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:003a71c8 cdw11:3b416c6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.894 [2024-11-05 10:36:02.819639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.894 #46 NEW cov: 12453 ft: 14946 corp: 19/366b lim: 40 exec/s: 46 rss: 74Mb L: 14/36 MS: 1 PersAutoDict- DE: "\000:q\310;Aln"- 00:08:36.894 [2024-11-05 10:36:02.880126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff00fdff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.894 [2024-11-05 10:36:02.880151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.894 [2024-11-05 10:36:02.880212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:003a71c8 cdw11:3b416c6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.894 [2024-11-05 10:36:02.880230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.894 [2024-11-05 10:36:02.880292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff0b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.894 [2024-11-05 10:36:02.880306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:36.894 #47 NEW cov: 12453 ft: 15021 corp: 20/390b lim: 40 exec/s: 47 rss: 74Mb L: 24/36 MS: 1 ChangeBinInt- 00:08:36.894 [2024-11-05 10:36:02.919896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 
cid:4 nsid:0 cdw10:ffff2aff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.894 [2024-11-05 10:36:02.919922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.894 #48 NEW cov: 12453 ft: 15037 corp: 21/404b lim: 40 exec/s: 48 rss: 74Mb L: 14/36 MS: 1 EraseBytes- 00:08:37.152 [2024-11-05 10:36:02.980386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:003a71c8 cdw11:3b416c6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.152 [2024-11-05 10:36:02.980413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.152 [2024-11-05 10:36:02.980492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:003a71c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.152 [2024-11-05 10:36:02.980508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:37.152 [2024-11-05 10:36:02.980568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:3b416c6e cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.152 [2024-11-05 10:36:02.980583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:37.152 #49 NEW cov: 12453 ft: 15108 corp: 22/432b lim: 40 exec/s: 49 rss: 74Mb L: 28/36 MS: 1 EraseBytes- 00:08:37.152 [2024-11-05 10:36:03.040599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffff3a3a cdw11:3a3affff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.152 [2024-11-05 10:36:03.040625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.152 [2024-11-05 10:36:03.040685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ff00fdff cdw11:003a71c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.152 [2024-11-05 10:36:03.040700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:37.152 [2024-11-05 10:36:03.040758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:3b416c6e cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.152 [2024-11-05 10:36:03.040773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:37.152 #50 NEW cov: 12453 ft: 15153 corp: 23/460b lim: 40 exec/s: 50 rss: 74Mb L: 28/36 MS: 1 InsertRepeatedBytes- 00:08:37.152 [2024-11-05 10:36:03.100466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.152 [2024-11-05 10:36:03.100491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.152 #51 NEW cov: 12453 ft: 15163 corp: 24/474b lim: 40 exec/s: 51 rss: 74Mb L: 14/36 MS: 1 EraseBytes- 00:08:37.152 [2024-11-05 10:36:03.140525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:37.152 [2024-11-05 10:36:03.140553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.152 #55 NEW cov: 12453 ft: 15165 corp: 25/483b lim: 40 exec/s: 55 rss: 74Mb L: 9/36 MS: 4 ChangeByte-ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:08:37.152 [2024-11-05 10:36:03.180632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:003a71c8 cdw11:3b416c6f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.152 [2024-11-05 10:36:03.180657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.152 #56 NEW cov: 12453 ft: 15169 corp: 26/497b lim: 40 exec/s: 56 rss: 74Mb L: 14/36 MS: 1 ChangeBinInt- 00:08:37.410 [2024-11-05 10:36:03.241186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:fdffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.410 [2024-11-05 10:36:03.241212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.410 [2024-11-05 10:36:03.241271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.410 [2024-11-05 10:36:03.241286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:37.410 [2024-11-05 10:36:03.241342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffdfff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.410 [2024-11-05 10:36:03.241355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:37.410 #57 NEW cov: 12453 ft: 15189 corp: 27/527b lim: 40 exec/s: 57 rss: 74Mb L: 30/36 MS: 1 ChangeBit- 00:08:37.410 [2024-11-05 10:36:03.301290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.410 [2024-11-05 10:36:03.301315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.410 [2024-11-05 10:36:03.301391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.410 [2024-11-05 10:36:03.301406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:37.410 [2024-11-05 10:36:03.301464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:fdffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.410 [2024-11-05 10:36:03.301477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:37.410 #58 NEW cov: 12453 ft: 15193 corp: 28/556b lim: 40 exec/s: 58 rss: 74Mb L: 29/36 MS: 1 ChangeBit- 00:08:37.410 [2024-11-05 10:36:03.361638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.411 [2024-11-05 10:36:03.361664] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.411 [2024-11-05 10:36:03.361724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.411 [2024-11-05 10:36:03.361740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:37.411 [2024-11-05 10:36:03.361801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:fdffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.411 [2024-11-05 10:36:03.361815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:37.411 [2024-11-05 10:36:03.361879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.411 [2024-11-05 10:36:03.361893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:37.411 #59 NEW cov: 12453 ft: 15205 corp: 29/592b lim: 40 exec/s: 59 rss: 74Mb L: 36/36 MS: 1 CrossOver- 00:08:37.411 [2024-11-05 10:36:03.421571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff07 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.411 [2024-11-05 10:36:03.421596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.411 [2024-11-05 10:36:03.421671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:ffffff0b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.411 [2024-11-05 10:36:03.421686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:37.411 #60 NEW cov: 12453 ft: 15221 corp: 30/608b lim: 40 exec/s: 60 rss: 74Mb L: 16/36 MS: 1 ChangeBinInt- 00:08:37.411 [2024-11-05 10:36:03.461820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff00fdff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.411 [2024-11-05 10:36:03.461845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.411 [2024-11-05 10:36:03.461903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:003a71c8 cdw11:3b416c6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.411 [2024-11-05 10:36:03.461918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:37.411 [2024-11-05 10:36:03.461975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff230b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.411 [2024-11-05 10:36:03.461989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:37.411 #61 NEW cov: 12453 ft: 15262 corp: 31/632b lim: 40 exec/s: 61 rss: 74Mb L: 24/36 MS: 1 ChangeByte- 00:08:37.669 [2024-11-05 10:36:03.501639] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.669 [2024-11-05 10:36:03.501666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.669 #62 NEW cov: 12453 ft: 15291 corp: 32/646b lim: 40 exec/s: 62 rss: 74Mb L: 14/36 MS: 1 CopyPart- 00:08:37.669 [2024-11-05 10:36:03.561958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffff2aff cdw11:ffffff01 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.669 [2024-11-05 10:36:03.561983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.669 [2024-11-05 10:36:03.562044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:000018ff cdw11:ffff10ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.669 [2024-11-05 10:36:03.562059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:37.669 #63 NEW cov: 12453 ft: 15352 corp: 33/667b lim: 40 exec/s: 63 rss: 74Mb L: 21/36 MS: 1 CMP- DE: "\001\000\000\030"- 00:08:37.669 [2024-11-05 10:36:03.602398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.669 [2024-11-05 10:36:03.602423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.669 [2024-11-05 10:36:03.602488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:003a71c8 cdw11:3b416c6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.669 [2024-11-05 10:36:03.602503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:37.669 [2024-11-05 10:36:03.602563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.669 [2024-11-05 10:36:03.602576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:37.669 [2024-11-05 10:36:03.602636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ff003a71 cdw11:c83b416c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.669 [2024-11-05 10:36:03.602650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:37.669 #64 NEW cov: 12453 ft: 15376 corp: 34/705b lim: 40 exec/s: 64 rss: 75Mb L: 38/38 MS: 1 CopyPart- 00:08:37.670 [2024-11-05 10:36:03.662389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.670 [2024-11-05 10:36:03.662414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.670 [2024-11-05 10:36:03.662472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:003a71c8 cdw11:f2416c6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.670 [2024-11-05 
10:36:03.662487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:37.670 [2024-11-05 10:36:03.662546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff0b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.670 [2024-11-05 10:36:03.662560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:37.670 #65 NEW cov: 12453 ft: 15399 corp: 35/729b lim: 40 exec/s: 65 rss: 75Mb L: 24/38 MS: 1 ChangeByte- 00:08:37.670 [2024-11-05 10:36:03.702648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:003a71c8 cdw11:3b416c6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.670 [2024-11-05 10:36:03.702674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.670 [2024-11-05 10:36:03.702729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:8b8b8b8b cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.670 [2024-11-05 10:36:03.702745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:37.670 [2024-11-05 10:36:03.702801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:003a71c8 cdw11:3b416c6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.670 [2024-11-05 10:36:03.702815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:37.670 [2024-11-05 10:36:03.702873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff0b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:37.670 [2024-11-05 10:36:03.702888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:37.670 #66 NEW cov: 12453 ft: 15412 corp: 36/761b lim: 40 exec/s: 33 rss: 75Mb L: 32/38 MS: 1 InsertRepeatedBytes- 00:08:37.670 #66 DONE cov: 12453 ft: 15412 corp: 36/761b lim: 40 exec/s: 33 rss: 75Mb 00:08:37.670 ###### Recommended dictionary. ###### 00:08:37.670 "\000:q\310;Aln" # Uses: 3 00:08:37.670 "\001\000\000\030" # Uses: 0 00:08:37.670 ###### End of recommended dictionary. 
###### 00:08:37.670 Done 66 runs in 2 second(s) 00:08:37.928 10:36:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:08:37.928 10:36:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:37.928 10:36:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:37.928 10:36:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:08:37.928 10:36:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:08:37.928 10:36:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:37.928 10:36:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:37.928 10:36:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:08:37.928 10:36:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:08:37.928 10:36:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:37.928 10:36:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:37.928 10:36:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:08:37.928 10:36:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:08:37.928 10:36:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:08:37.928 10:36:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:08:37.928 10:36:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:37.928 10:36:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:37.928 10:36:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:37.928 10:36:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:08:37.928 [2024-11-05 10:36:03.902138] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
00:08:37.928 [2024-11-05 10:36:03.902221] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2865334 ] 00:08:38.187 [2024-11-05 10:36:04.169159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.187 [2024-11-05 10:36:04.217042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.446 [2024-11-05 10:36:04.281001] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.446 [2024-11-05 10:36:04.297240] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:08:38.446 INFO: Running with entropic power schedule (0xFF, 100). 00:08:38.446 INFO: Seed: 124129986 00:08:38.446 INFO: Loaded 1 modules (387441 inline 8-bit counters): 387441 [0x2c3ac4c, 0x2c995bd), 00:08:38.446 INFO: Loaded 1 PC tables (387441 PCs): 387441 [0x2c995c0,0x3282cd0), 00:08:38.446 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:08:38.446 INFO: A corpus is not provided, starting from an empty corpus 00:08:38.446 #2 INITED exec/s: 0 rss: 66Mb 00:08:38.446 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:38.446 This may also happen if the target rejected all inputs we tried so far 00:08:38.446 [2024-11-05 10:36:04.369003] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.446 [2024-11-05 10:36:04.369060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.446 [2024-11-05 10:36:04.369186] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.446 [2024-11-05 10:36:04.369215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.012 NEW_FUNC[1/716]: 0x44fd08 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:08:39.012 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:39.012 #5 NEW cov: 12220 ft: 12216 corp: 2/17b lim: 35 exec/s: 0 rss: 73Mb L: 16/16 MS: 3 ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:08:39.012 [2024-11-05 10:36:04.870356] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.012 [2024-11-05 10:36:04.870410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.012 [2024-11-05 10:36:04.870522] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.012 [2024-11-05 10:36:04.870549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.012 #16 NEW cov: 12333 ft: 12796 corp: 3/33b lim: 35 exec/s: 0 rss: 73Mb L: 16/16 MS: 1 CrossOver- 00:08:39.012 [2024-11-05 10:36:04.970270] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.012 [2024-11-05 10:36:04.970314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.012 #21 NEW cov: 12346 ft: 13752 corp: 4/42b lim: 35 exec/s: 0 rss: 73Mb L: 9/16 MS: 5 ChangeByte-ChangeBit-ChangeBinInt-ChangeByte-CMP- DE: "\000\000\000\000\000\000\000\000"- 00:08:39.012 [2024-11-05 10:36:05.041193] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.012 [2024-11-05 10:36:05.041234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.012 [2024-11-05 10:36:05.041339] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.012 [2024-11-05 10:36:05.041364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.012 [2024-11-05 10:36:05.041465] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000042 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.012 [2024-11-05 10:36:05.041488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:39.012 #22 NEW cov: 12431 ft: 14152 corp: 5/65b lim: 35 exec/s: 0 rss: 73Mb L: 23/23 MS: 1 InsertRepeatedBytes- 00:08:39.270 [2024-11-05 10:36:05.111523] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.270 [2024-11-05 10:36:05.111563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.270 [2024-11-05 10:36:05.111666] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.270 [2024-11-05 10:36:05.111689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.270 [2024-11-05 10:36:05.111801] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.270 [2024-11-05 10:36:05.111828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:39.270 #28 NEW cov: 12431 ft: 14229 corp: 6/89b lim: 35 exec/s: 0 rss: 73Mb L: 24/24 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:08:39.270 [2024-11-05 10:36:05.171744] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.270 [2024-11-05 10:36:05.171790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.270 [2024-11-05 10:36:05.171894] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.270 [2024-11-05 10:36:05.171918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.270 [2024-11-05 
10:36:05.172027] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.270 [2024-11-05 10:36:05.172052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:39.270 NEW_FUNC[1/1]: 0x1c30d58 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:39.270 #29 NEW cov: 12454 ft: 14291 corp: 7/113b lim: 35 exec/s: 0 rss: 73Mb L: 24/24 MS: 1 ChangeBinInt- 00:08:39.270 [2024-11-05 10:36:05.271671] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.270 [2024-11-05 10:36:05.271722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.270 [2024-11-05 10:36:05.271831] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.270 [2024-11-05 10:36:05.271853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.270 #30 NEW cov: 12454 ft: 14368 corp: 8/129b lim: 35 exec/s: 0 rss: 73Mb L: 16/24 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:08:39.270 [2024-11-05 10:36:05.332371] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.270 [2024-11-05 10:36:05.332415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.270 [2024-11-05 10:36:05.332530] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.270 [2024-11-05 10:36:05.332553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.271 [2024-11-05 10:36:05.332654] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.271 [2024-11-05 10:36:05.332679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:39.529 #31 NEW cov: 12454 ft: 14443 corp: 9/153b lim: 35 exec/s: 31 rss: 73Mb L: 24/24 MS: 1 ChangeByte- 00:08:39.529 [2024-11-05 10:36:05.432467] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.529 [2024-11-05 10:36:05.432509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.529 [2024-11-05 10:36:05.432616] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.529 [2024-11-05 10:36:05.432640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.529 #32 NEW cov: 12454 ft: 14522 corp: 10/169b lim: 35 exec/s: 32 rss: 73Mb L: 16/24 MS: 1 ChangeBinInt- 00:08:39.529 [2024-11-05 10:36:05.522780] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000f3 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.529 [2024-11-05 10:36:05.522829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.529 [2024-11-05 10:36:05.522930] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.529 [2024-11-05 10:36:05.522954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.529 #33 NEW cov: 12454 ft: 14659 corp: 11/185b lim: 35 exec/s: 33 rss: 74Mb L: 16/24 MS: 1 ChangeBinInt- 00:08:39.787 [2024-11-05 10:36:05.612766] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.787 [2024-11-05 10:36:05.612804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.787 #34 NEW cov: 12454 ft: 14744 corp: 12/195b lim: 35 exec/s: 34 rss: 74Mb L: 10/24 MS: 1 InsertByte- 00:08:39.787 [2024-11-05 10:36:05.703506] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.787 [2024-11-05 10:36:05.703543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.787 [2024-11-05 10:36:05.703658] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.787 [2024-11-05 10:36:05.703685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.787 #35 NEW cov: 12454 ft: 14767 corp: 13/211b lim: 35 exec/s: 35 rss: 74Mb L: 16/24 MS: 1 ChangeBit- 00:08:39.787 [2024-11-05 10:36:05.793814] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.787 [2024-11-05 10:36:05.793852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.787 [2024-11-05 10:36:05.793959] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.787 [2024-11-05 10:36:05.793983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.787 #36 NEW cov: 12454 ft: 14794 corp: 14/227b lim: 35 exec/s: 36 rss: 74Mb L: 16/24 MS: 1 ShuffleBytes- 00:08:39.787 [2024-11-05 10:36:05.854465] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.787 [2024-11-05 10:36:05.854505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.787 [2024-11-05 10:36:05.854606] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.787 [2024-11-05 10:36:05.854626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.787 [2024-11-05 10:36:05.854742] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET 
FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.787 [2024-11-05 10:36:05.854764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.046 #37 NEW cov: 12454 ft: 14825 corp: 15/252b lim: 35 exec/s: 37 rss: 74Mb L: 25/25 MS: 1 InsertByte- 00:08:40.046 [2024-11-05 10:36:05.914334] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.046 [2024-11-05 10:36:05.914370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.046 [2024-11-05 10:36:05.914471] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.046 [2024-11-05 10:36:05.914497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.046 #38 NEW cov: 12454 ft: 14855 corp: 16/268b lim: 35 exec/s: 38 rss: 74Mb L: 16/25 MS: 1 ChangeBit- 00:08:40.046 [2024-11-05 10:36:06.005404] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.046 [2024-11-05 10:36:06.005444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.046 [2024-11-05 10:36:06.005559] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.046 [2024-11-05 10:36:06.005580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.046 [2024-11-05 10:36:06.005689] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.046 [2024-11-05 10:36:06.005718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.046 [2024-11-05 10:36:06.005825] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000042 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.046 [2024-11-05 10:36:06.005847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:40.046 #39 NEW cov: 12454 ft: 15175 corp: 17/299b lim: 35 exec/s: 39 rss: 74Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:08:40.046 [2024-11-05 10:36:06.105438] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.046 [2024-11-05 10:36:06.105476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.046 [2024-11-05 10:36:06.105584] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.046 [2024-11-05 10:36:06.105605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.046 [2024-11-05 10:36:06.105711] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:40.046 [2024-11-05 10:36:06.105741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.304 #40 NEW cov: 12454 ft: 15181 corp: 18/323b lim: 35 exec/s: 40 rss: 74Mb L: 24/31 MS: 1 ChangeByte- 00:08:40.304 [2024-11-05 10:36:06.165224] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.304 [2024-11-05 10:36:06.165261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.304 [2024-11-05 10:36:06.165371] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.304 [2024-11-05 10:36:06.165398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.304 #41 NEW cov: 12454 ft: 15219 corp: 19/339b lim: 35 exec/s: 41 rss: 74Mb L: 16/31 MS: 1 CopyPart- 00:08:40.304 [2024-11-05 10:36:06.255596] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.304 [2024-11-05 10:36:06.255631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.304 [2024-11-05 10:36:06.255749] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.304 [2024-11-05 10:36:06.255771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.304 #42 NEW cov: 12454 ft: 15227 corp: 20/355b lim: 35 exec/s: 42 rss: 74Mb L: 16/31 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:08:40.304 [2024-11-05 10:36:06.345564] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.304 [2024-11-05 10:36:06.345605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.304 #43 NEW cov: 12454 ft: 15248 corp: 21/364b lim: 35 exec/s: 21 rss: 74Mb L: 9/31 MS: 1 ChangeByte- 00:08:40.304 #43 DONE cov: 12454 ft: 15248 corp: 21/364b lim: 35 exec/s: 21 rss: 74Mb 00:08:40.304 ###### Recommended dictionary. ###### 00:08:40.304 "\000\000\000\000\000\000\000\000" # Uses: 3 00:08:40.304 ###### End of recommended dictionary. 
###### 00:08:40.304 Done 43 runs in 2 second(s) 00:08:40.563 10:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:08:40.563 10:36:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:40.563 10:36:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:40.563 10:36:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:08:40.563 10:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:08:40.563 10:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:40.563 10:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:40.563 10:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:40.563 10:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:08:40.563 10:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:40.563 10:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:40.563 10:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:08:40.563 10:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:08:40.563 10:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:40.563 10:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:08:40.563 10:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:40.563 10:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:40.563 10:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:40.563 10:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:08:40.563 [2024-11-05 10:36:06.542407] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
00:08:40.563 [2024-11-05 10:36:06.542493] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2866084 ] 00:08:40.822 [2024-11-05 10:36:06.816118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.822 [2024-11-05 10:36:06.864230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.080 [2024-11-05 10:36:06.928171] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.080 [2024-11-05 10:36:06.944406] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:08:41.080 INFO: Running with entropic power schedule (0xFF, 100). 00:08:41.080 INFO: Seed: 2773126106 00:08:41.080 INFO: Loaded 1 modules (387441 inline 8-bit counters): 387441 [0x2c3ac4c, 0x2c995bd), 00:08:41.080 INFO: Loaded 1 PC tables (387441 PCs): 387441 [0x2c995c0,0x3282cd0), 00:08:41.080 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:41.080 INFO: A corpus is not provided, starting from an empty corpus 00:08:41.080 #2 INITED exec/s: 0 rss: 66Mb 00:08:41.080 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:41.080 This may also happen if the target rejected all inputs we tried so far 00:08:41.080 [2024-11-05 10:36:06.990057] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005ad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.080 [2024-11-05 10:36:06.990086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.338 NEW_FUNC[1/715]: 0x451248 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:08:41.338 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:41.338 #4 NEW cov: 12208 ft: 12188 corp: 2/12b lim: 35 exec/s: 0 rss: 73Mb L: 11/11 MS: 2 ChangeByte-InsertRepeatedBytes- 00:08:41.338 [2024-11-05 10:36:07.311277] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000529 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.338 [2024-11-05 10:36:07.311315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.338 [2024-11-05 10:36:07.311376] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.338 [2024-11-05 10:36:07.311391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.339 [2024-11-05 10:36:07.311448] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.339 [2024-11-05 10:36:07.311462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.339 [2024-11-05 10:36:07.311520] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.339 [2024-11-05 10:36:07.311533] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:41.339 #9 NEW cov: 12321 ft: 13430 corp: 3/40b lim: 35 exec/s: 0 rss: 73Mb L: 28/28 MS: 5 InsertByte-CopyPart-CopyPart-InsertByte-InsertRepeatedBytes- 00:08:41.339 [2024-11-05 10:36:07.361388] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000529 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.339 [2024-11-05 10:36:07.361419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.339 [2024-11-05 10:36:07.361483] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.339 [2024-11-05 10:36:07.361499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.339 [2024-11-05 10:36:07.361557] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000002a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.339 [2024-11-05 10:36:07.361572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.339 [2024-11-05 10:36:07.361631] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.339 [2024-11-05 10:36:07.361645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:41.339 #10 NEW cov: 12327 ft: 13719 corp: 4/68b lim: 35 exec/s: 0 rss: 73Mb L: 28/28 MS: 1 ChangeByte- 00:08:41.597 [2024-11-05 10:36:07.421072] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000151 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.597 [2024-11-05 10:36:07.421104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.597 #15 NEW cov: 12412 ft: 14073 corp: 5/75b lim: 35 exec/s: 0 rss: 74Mb L: 7/28 MS: 5 ChangeByte-InsertRepeatedBytes-ChangeByte-ChangeBit-InsertByte- 00:08:41.597 [2024-11-05 10:36:07.461153] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000052a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.597 [2024-11-05 10:36:07.461179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.597 #16 NEW cov: 12412 ft: 14131 corp: 6/87b lim: 35 exec/s: 0 rss: 74Mb L: 12/28 MS: 1 InsertByte- 00:08:41.597 [2024-11-05 10:36:07.521350] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000528 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.597 [2024-11-05 10:36:07.521376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.597 #17 NEW cov: 12412 ft: 14161 corp: 7/99b lim: 35 exec/s: 0 rss: 74Mb L: 12/28 MS: 1 ChangeBinInt- 00:08:41.597 [2024-11-05 10:36:07.581501] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.597 [2024-11-05 10:36:07.581527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.597 #22 NEW cov: 
12412 ft: 14250 corp: 8/107b lim: 35 exec/s: 0 rss: 74Mb L: 8/28 MS: 5 InsertByte-InsertByte-EraseBytes-ChangeBinInt-CrossOver- 00:08:41.597 NEW_FUNC[1/1]: 0x46a6e8 in feat_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:273 00:08:41.597 #23 NEW cov: 12450 ft: 14328 corp: 9/119b lim: 35 exec/s: 0 rss: 74Mb L: 12/28 MS: 1 CMP- DE: "\001:q\313>\307\037R"- 00:08:41.856 [2024-11-05 10:36:07.681790] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.856 [2024-11-05 10:36:07.681816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.856 #24 NEW cov: 12450 ft: 14341 corp: 10/132b lim: 35 exec/s: 0 rss: 74Mb L: 13/28 MS: 1 InsertRepeatedBytes- 00:08:41.856 [2024-11-05 10:36:07.721925] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.856 [2024-11-05 10:36:07.721951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.856 #26 NEW cov: 12450 ft: 14369 corp: 11/139b lim: 35 exec/s: 0 rss: 74Mb L: 7/28 MS: 2 EraseBytes-CopyPart- 00:08:41.856 [2024-11-05 10:36:07.782347] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005ad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.856 [2024-11-05 10:36:07.782373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.856 [2024-11-05 10:36:07.782432] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.856 [2024-11-05 10:36:07.782446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.856 [2024-11-05 10:36:07.782505] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.856 [2024-11-05 10:36:07.782519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.856 #27 NEW cov: 12450 ft: 14628 corp: 12/164b lim: 35 exec/s: 0 rss: 74Mb L: 25/28 MS: 1 InsertRepeatedBytes- 00:08:41.856 [2024-11-05 10:36:07.822336] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000529 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.856 [2024-11-05 10:36:07.822365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.856 [2024-11-05 10:36:07.822425] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.856 [2024-11-05 10:36:07.822440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.856 #28 NEW cov: 12450 ft: 14821 corp: 13/184b lim: 35 exec/s: 0 rss: 74Mb L: 20/28 MS: 1 EraseBytes- 00:08:41.856 NEW_FUNC[1/2]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:08:41.856 NEW_FUNC[2/2]: 0x1c30d58 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:41.856 #29 NEW cov: 12487 ft: 14917 corp: 14/192b lim: 35 exec/s: 0 rss: 74Mb L: 8/28 MS: 1 InsertRepeatedBytes- 00:08:41.856 [2024-11-05 10:36:07.902424] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.856 [2024-11-05 10:36:07.902452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.114 #30 NEW cov: 12487 ft: 14935 corp: 15/205b lim: 35 exec/s: 0 rss: 74Mb L: 13/28 MS: 1 ShuffleBytes- 00:08:42.114 #31 NEW cov: 12487 ft: 14952 corp: 16/216b lim: 35 exec/s: 31 rss: 74Mb L: 11/28 MS: 1 PersAutoDict- DE: "\001:q\313>\307\037R"- 00:08:42.114 [2024-11-05 10:36:08.003153] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000529 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.114 [2024-11-05 10:36:08.003181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.114 [2024-11-05 10:36:08.003244] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.114 [2024-11-05 10:36:08.003259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.114 [2024-11-05 10:36:08.003379] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.114 [2024-11-05 10:36:08.003396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.114 #32 NEW cov: 12487 ft: 14993 corp: 17/248b lim: 35 exec/s: 32 rss: 74Mb L: 32/32 MS: 1 CMP- DE: "\377\377\001\000"- 00:08:42.114 #38 NEW cov: 12487 ft: 15035 corp: 18/260b lim: 35 exec/s: 38 rss: 74Mb L: 12/32 MS: 1 InsertByte- 00:08:42.114 [2024-11-05 10:36:08.123522] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000529 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.114 [2024-11-05 10:36:08.123550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.114 [2024-11-05 10:36:08.123611] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000049e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.114 [2024-11-05 10:36:08.123625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.114 [2024-11-05 10:36:08.123685] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.114 [2024-11-05 10:36:08.123700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.114 [2024-11-05 10:36:08.123752] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.114 [2024-11-05 10:36:08.123766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.114 #39 NEW cov: 12487 ft: 15041 corp: 19/291b lim: 35 exec/s: 39 rss: 74Mb L: 
31/32 MS: 1 InsertRepeatedBytes- 00:08:42.114 [2024-11-05 10:36:08.163500] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005ad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.114 [2024-11-05 10:36:08.163530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.115 [2024-11-05 10:36:08.163591] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.115 [2024-11-05 10:36:08.163604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.115 [2024-11-05 10:36:08.163661] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.115 [2024-11-05 10:36:08.163675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.373 #40 NEW cov: 12487 ft: 15071 corp: 20/316b lim: 35 exec/s: 40 rss: 74Mb L: 25/32 MS: 1 ChangeBit- 00:08:42.373 #41 NEW cov: 12487 ft: 15088 corp: 21/327b lim: 35 exec/s: 41 rss: 74Mb L: 11/32 MS: 1 ShuffleBytes- 00:08:42.373 [2024-11-05 10:36:08.263931] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000529 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.373 [2024-11-05 10:36:08.263959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.373 [2024-11-05 10:36:08.264019] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.373 [2024-11-05 10:36:08.264035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.373 [2024-11-05 10:36:08.264092] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000002a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.373 [2024-11-05 10:36:08.264108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.373 [2024-11-05 10:36:08.264164] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.373 [2024-11-05 10:36:08.264178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.373 #42 NEW cov: 12487 ft: 15096 corp: 22/355b lim: 35 exec/s: 42 rss: 74Mb L: 28/32 MS: 1 ShuffleBytes- 00:08:42.373 [2024-11-05 10:36:08.303620] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.373 [2024-11-05 10:36:08.303647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.373 #43 NEW cov: 12487 ft: 15125 corp: 23/363b lim: 35 exec/s: 43 rss: 74Mb L: 8/32 MS: 1 InsertByte- 00:08:42.373 [2024-11-05 10:36:08.363808] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000060 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.373 [2024-11-05 10:36:08.363835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.373 #44 NEW cov: 12487 ft: 15150 corp: 24/376b lim: 35 exec/s: 44 rss: 74Mb L: 13/32 MS: 1 ChangeByte- 00:08:42.373 [2024-11-05 10:36:08.404195] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000001ad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.374 [2024-11-05 10:36:08.404222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.374 [2024-11-05 10:36:08.404284] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.374 [2024-11-05 10:36:08.404299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.374 [2024-11-05 10:36:08.404360] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.374 [2024-11-05 10:36:08.404378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.374 #45 NEW cov: 12487 ft: 15179 corp: 25/402b lim: 35 exec/s: 45 rss: 75Mb L: 26/32 MS: 1 InsertByte- 00:08:42.632 [2024-11-05 10:36:08.464361] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005ad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.632 [2024-11-05 10:36:08.464388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.632 [2024-11-05 10:36:08.464448] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.632 [2024-11-05 10:36:08.464462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.632 [2024-11-05 10:36:08.464522] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.632 [2024-11-05 10:36:08.464537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.632 #46 NEW cov: 12487 ft: 15210 corp: 26/427b lim: 35 exec/s: 46 rss: 75Mb L: 25/32 MS: 1 ChangeBit- 00:08:42.632 [2024-11-05 10:36:08.504175] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.632 [2024-11-05 10:36:08.504201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.632 #47 NEW cov: 12487 ft: 15237 corp: 27/440b lim: 35 exec/s: 47 rss: 75Mb L: 13/32 MS: 1 ChangeBit- 00:08:42.632 [2024-11-05 10:36:08.564332] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000228 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.632 [2024-11-05 10:36:08.564359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.632 #48 NEW cov: 12487 ft: 15244 corp: 28/452b lim: 35 exec/s: 48 rss: 75Mb L: 12/32 MS: 1 CMP- DE: "@\000\000\000\000\000\000\000"- 00:08:42.632 [2024-11-05 10:36:08.604602] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 
cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.632 [2024-11-05 10:36:08.604629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.632 [2024-11-05 10:36:08.604688] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.632 [2024-11-05 10:36:08.604703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.632 #50 NEW cov: 12487 ft: 15257 corp: 29/469b lim: 35 exec/s: 50 rss: 75Mb L: 17/32 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:42.632 [2024-11-05 10:36:08.644776] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000052 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.632 [2024-11-05 10:36:08.644803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.632 #51 NEW cov: 12487 ft: 15262 corp: 30/489b lim: 35 exec/s: 51 rss: 75Mb L: 20/32 MS: 1 PersAutoDict- DE: "\001:q\313>\307\037R"- 00:08:42.890 #52 NEW cov: 12487 ft: 15302 corp: 31/497b lim: 35 exec/s: 52 rss: 75Mb L: 8/32 MS: 1 CopyPart- 00:08:42.890 [2024-11-05 10:36:08.765230] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005ad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.890 [2024-11-05 10:36:08.765258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.890 [2024-11-05 10:36:08.765317] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.890 [2024-11-05 10:36:08.765336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.890 [2024-11-05 10:36:08.765394] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.890 [2024-11-05 10:36:08.765425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.890 #53 NEW cov: 12487 ft: 15328 corp: 32/522b lim: 35 exec/s: 53 rss: 75Mb L: 25/32 MS: 1 ChangeBit- 00:08:42.890 [2024-11-05 10:36:08.825123] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005ad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.890 [2024-11-05 10:36:08.825150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.890 #54 NEW cov: 12487 ft: 15368 corp: 33/533b lim: 35 exec/s: 54 rss: 75Mb L: 11/32 MS: 1 ShuffleBytes- 00:08:42.890 #55 NEW cov: 12487 ft: 15388 corp: 34/544b lim: 35 exec/s: 55 rss: 75Mb L: 11/32 MS: 1 PersAutoDict- DE: "\001:q\313>\307\037R"- 00:08:42.890 [2024-11-05 10:36:08.925801] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000529 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.890 [2024-11-05 10:36:08.925827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.890 [2024-11-05 10:36:08.925886] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005a8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:08:42.890 [2024-11-05 10:36:08.925900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.890 [2024-11-05 10:36:08.925958] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000002a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.890 [2024-11-05 10:36:08.925972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.890 [2024-11-05 10:36:08.926030] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.890 [2024-11-05 10:36:08.926043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.890 #56 NEW cov: 12487 ft: 15401 corp: 35/572b lim: 35 exec/s: 56 rss: 75Mb L: 28/32 MS: 1 ShuffleBytes- 00:08:42.890 [2024-11-05 10:36:08.965489] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.890 [2024-11-05 10:36:08.965516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.148 #57 NEW cov: 12487 ft: 15412 corp: 36/583b lim: 35 exec/s: 28 rss: 75Mb L: 11/32 MS: 1 EraseBytes- 00:08:43.148 #57 DONE cov: 12487 ft: 15412 corp: 36/583b lim: 35 exec/s: 28 rss: 75Mb 00:08:43.148 ###### Recommended dictionary. ###### 00:08:43.148 "\001:q\313>\307\037R" # Uses: 3 00:08:43.148 "\377\377\001\000" # Uses: 1 00:08:43.148 "@\000\000\000\000\000\000\000" # Uses: 0 00:08:43.148 ###### End of recommended dictionary. 
###### 00:08:43.148 Done 57 runs in 2 second(s) 00:08:43.148 10:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:08:43.148 10:36:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:43.148 10:36:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:43.148 10:36:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:08:43.148 10:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:08:43.148 10:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:43.148 10:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:43.148 10:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:08:43.148 10:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:08:43.148 10:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:43.148 10:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:43.148 10:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:08:43.148 10:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:08:43.148 10:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:08:43.148 10:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:08:43.148 10:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:43.148 10:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:43.148 10:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:43.148 10:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:08:43.148 [2024-11-05 10:36:09.160057] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
00:08:43.148 [2024-11-05 10:36:09.160129] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2866432 ] 00:08:43.407 [2024-11-05 10:36:09.439700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.665 [2024-11-05 10:36:09.487606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.665 [2024-11-05 10:36:09.551590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.665 [2024-11-05 10:36:09.567832] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:08:43.665 INFO: Running with entropic power schedule (0xFF, 100). 00:08:43.665 INFO: Seed: 1100160178 00:08:43.665 INFO: Loaded 1 modules (387441 inline 8-bit counters): 387441 [0x2c3ac4c, 0x2c995bd), 00:08:43.665 INFO: Loaded 1 PC tables (387441 PCs): 387441 [0x2c995c0,0x3282cd0), 00:08:43.665 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:08:43.665 INFO: A corpus is not provided, starting from an empty corpus 00:08:43.665 #2 INITED exec/s: 0 rss: 66Mb 00:08:43.665 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:43.665 This may also happen if the target rejected all inputs we tried so far 00:08:43.665 [2024-11-05 10:36:09.613333] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.665 [2024-11-05 10:36:09.613363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.923 NEW_FUNC[1/716]: 0x452708 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:08:43.923 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:43.923 #42 NEW cov: 12312 ft: 12304 corp: 2/32b lim: 105 exec/s: 0 rss: 73Mb L: 31/31 MS: 5 CopyPart-EraseBytes-CopyPart-ShuffleBytes-InsertRepeatedBytes- 00:08:43.923 [2024-11-05 10:36:09.934192] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.923 [2024-11-05 10:36:09.934233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.923 #48 NEW cov: 12425 ft: 12865 corp: 3/63b lim: 105 exec/s: 0 rss: 73Mb L: 31/31 MS: 1 ChangeBit- 00:08:43.923 [2024-11-05 10:36:09.994253] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.923 [2024-11-05 10:36:09.994284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.181 #54 NEW cov: 12431 ft: 13047 corp: 4/94b lim: 105 exec/s: 0 rss: 73Mb L: 31/31 MS: 1 CrossOver- 00:08:44.181 [2024-11-05 10:36:10.054461] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583732735 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.181 [2024-11-05 10:36:10.054498] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.181 #55 NEW cov: 12516 ft: 13332 corp: 5/125b lim: 105 exec/s: 0 rss: 73Mb L: 31/31 MS: 1 ChangeBinInt- 00:08:44.181 [2024-11-05 10:36:10.094488] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:14962 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.181 [2024-11-05 10:36:10.094521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.181 #56 NEW cov: 12516 ft: 13502 corp: 6/156b lim: 105 exec/s: 0 rss: 73Mb L: 31/31 MS: 1 CMP- DE: "\001:q\321\207\005\230n"- 00:08:44.181 [2024-11-05 10:36:10.134618] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:315 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.181 [2024-11-05 10:36:10.134646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.181 #57 NEW cov: 12516 ft: 13628 corp: 7/188b lim: 105 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 InsertByte- 00:08:44.181 [2024-11-05 10:36:10.194807] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583732735 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.181 [2024-11-05 10:36:10.194835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.181 #58 NEW cov: 12516 ft: 13764 corp: 8/219b lim: 105 exec/s: 0 rss: 73Mb L: 31/32 MS: 1 PersAutoDict- DE: "\001:q\321\207\005\230n"- 00:08:44.181 [2024-11-05 10:36:10.254945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:14962 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.181 [2024-11-05 10:36:10.254972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.440 #59 NEW cov: 12516 ft: 13833 corp: 9/250b lim: 105 exec/s: 0 rss: 73Mb L: 31/32 MS: 1 ShuffleBytes- 00:08:44.440 [2024-11-05 10:36:10.295061] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.440 [2024-11-05 10:36:10.295089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.440 #60 NEW cov: 12516 ft: 13866 corp: 10/273b lim: 105 exec/s: 0 rss: 73Mb L: 23/32 MS: 1 EraseBytes- 00:08:44.440 [2024-11-05 10:36:10.355234] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.440 [2024-11-05 10:36:10.355261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.440 #61 NEW cov: 12516 ft: 13942 corp: 11/304b lim: 105 exec/s: 0 rss: 73Mb L: 31/32 MS: 1 ChangeBinInt- 00:08:44.440 [2024-11-05 10:36:10.395313] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583732735 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.440 [2024-11-05 10:36:10.395340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.440 #67 NEW cov: 12516 ft: 13978 corp: 12/335b lim: 105 exec/s: 0 rss: 73Mb L: 31/32 MS: 1 CopyPart- 00:08:44.440 [2024-11-05 10:36:10.455482] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583077185 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.440 [2024-11-05 10:36:10.455510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.440 #68 NEW cov: 12516 ft: 14026 corp: 13/367b lim: 105 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 InsertByte- 00:08:44.440 [2024-11-05 10:36:10.495596] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:315 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.440 [2024-11-05 10:36:10.495625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.698 NEW_FUNC[1/1]: 0x1c30d58 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:44.698 #69 NEW cov: 12539 ft: 14057 corp: 14/400b lim: 105 exec/s: 0 rss: 74Mb L: 33/33 MS: 1 InsertByte- 00:08:44.698 [2024-11-05 10:36:10.555823] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583732735 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.698 [2024-11-05 10:36:10.555853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.698 #70 NEW cov: 12539 ft: 14092 corp: 15/431b lim: 105 exec/s: 0 rss: 74Mb L: 31/33 MS: 1 ShuffleBytes- 00:08:44.698 [2024-11-05 10:36:10.595896] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446468092164505599 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.698 [2024-11-05 10:36:10.595926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.698 #76 NEW cov: 12539 ft: 14231 corp: 16/454b lim: 105 exec/s: 76 rss: 74Mb L: 23/33 MS: 1 ChangeByte- 00:08:44.698 [2024-11-05 10:36:10.656097] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18388760224380682239 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.698 [2024-11-05 10:36:10.656127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.698 #77 NEW cov: 12539 ft: 14245 corp: 17/477b lim: 105 exec/s: 77 rss: 74Mb L: 23/33 MS: 1 ChangeByte- 00:08:44.698 [2024-11-05 10:36:10.696136] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583732735 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.698 [2024-11-05 10:36:10.696165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.698 #78 NEW cov: 12539 ft: 14259 corp: 18/508b lim: 105 exec/s: 78 rss: 74Mb L: 31/33 MS: 1 ChangeBit- 00:08:44.698 [2024-11-05 10:36:10.756358] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.698 [2024-11-05 10:36:10.756387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:1 00:08:44.957 #79 NEW cov: 12539 ft: 14274 corp: 19/549b lim: 105 exec/s: 79 rss: 74Mb L: 41/41 MS: 1 InsertRepeatedBytes- 00:08:44.957 [2024-11-05 10:36:10.796462] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2305843005087219711 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.957 [2024-11-05 10:36:10.796489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.957 #80 NEW cov: 12539 ft: 14282 corp: 20/580b lim: 105 exec/s: 80 rss: 74Mb L: 31/41 MS: 1 ChangeBinInt- 00:08:44.957 [2024-11-05 10:36:10.836565] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583732735 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.957 [2024-11-05 10:36:10.836592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.957 #81 NEW cov: 12539 ft: 14342 corp: 21/611b lim: 105 exec/s: 81 rss: 74Mb L: 31/41 MS: 1 ShuffleBytes- 00:08:44.957 [2024-11-05 10:36:10.896883] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583732735 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.957 [2024-11-05 10:36:10.896912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.957 [2024-11-05 10:36:10.896973] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.957 [2024-11-05 10:36:10.896986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:44.957 #82 NEW cov: 12539 ft: 14843 corp: 22/658b lim: 105 exec/s: 82 rss: 74Mb L: 47/47 MS: 1 CopyPart- 00:08:44.957 [2024-11-05 10:36:10.936835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583732735 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.957 [2024-11-05 10:36:10.936875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.957 #83 NEW cov: 12539 ft: 14893 corp: 23/689b lim: 105 exec/s: 83 rss: 74Mb L: 31/47 MS: 1 ChangeByte- 00:08:44.957 [2024-11-05 10:36:10.976953] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.957 [2024-11-05 10:36:10.976984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.957 #88 NEW cov: 12539 ft: 14934 corp: 24/723b lim: 105 exec/s: 88 rss: 74Mb L: 34/47 MS: 5 ChangeBit-InsertByte-ShuffleBytes-CopyPart-CrossOver- 00:08:44.957 [2024-11-05 10:36:11.017111] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18388760224380682239 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.957 [2024-11-05 10:36:11.017140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.215 #90 NEW cov: 12539 ft: 14942 corp: 25/751b lim: 105 exec/s: 90 rss: 74Mb L: 28/47 MS: 2 EraseBytes-PersAutoDict- DE: "\001:q\321\207\005\230n"- 00:08:45.215 [2024-11-05 
10:36:11.077261] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18388760224397459455 len:65472 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.215 [2024-11-05 10:36:11.077289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.215 #94 NEW cov: 12539 ft: 14989 corp: 26/789b lim: 105 exec/s: 94 rss: 74Mb L: 38/47 MS: 4 EraseBytes-ChangeBit-InsertByte-InsertRepeatedBytes- 00:08:45.215 [2024-11-05 10:36:11.117506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18388760224380682239 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.215 [2024-11-05 10:36:11.117536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.215 [2024-11-05 10:36:11.117598] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.215 [2024-11-05 10:36:11.117615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.215 #95 NEW cov: 12539 ft: 14998 corp: 27/838b lim: 105 exec/s: 95 rss: 74Mb L: 49/49 MS: 1 InsertRepeatedBytes- 00:08:45.215 [2024-11-05 10:36:11.157478] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446512081235607551 len:53563 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.215 [2024-11-05 10:36:11.157505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.215 #96 NEW cov: 12539 ft: 15010 corp: 28/871b lim: 105 exec/s: 96 rss: 74Mb L: 33/49 MS: 1 CopyPart- 00:08:45.215 [2024-11-05 10:36:11.218172] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18388760224397459455 len:65472 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.215 [2024-11-05 10:36:11.218200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.215 [2024-11-05 10:36:11.218252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13310591802206107832 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.215 [2024-11-05 10:36:11.218270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.215 [2024-11-05 10:36:11.218322] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.216 [2024-11-05 10:36:11.218340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:45.216 [2024-11-05 10:36:11.218399] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.216 [2024-11-05 10:36:11.218416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:45.216 #97 NEW cov: 12539 ft: 15585 corp: 29/961b lim: 105 exec/s: 97 rss: 74Mb L: 90/90 MS: 1 InsertRepeatedBytes- 00:08:45.216 [2024-11-05 10:36:11.277880] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6557240778447716351 len:47289 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.216 [2024-11-05 10:36:11.277908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.474 #108 NEW cov: 12539 ft: 15605 corp: 30/992b lim: 105 exec/s: 108 rss: 74Mb L: 31/90 MS: 1 CrossOver- 00:08:45.474 [2024-11-05 10:36:11.338056] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.474 [2024-11-05 10:36:11.338085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.474 #109 NEW cov: 12539 ft: 15633 corp: 31/1033b lim: 105 exec/s: 109 rss: 74Mb L: 41/90 MS: 1 ChangeBit- 00:08:45.474 [2024-11-05 10:36:11.398520] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583077185 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.474 [2024-11-05 10:36:11.398550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.474 [2024-11-05 10:36:11.398604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18377524319182913535 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.474 [2024-11-05 10:36:11.398619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.474 [2024-11-05 10:36:11.398679] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446463951647539199 len:34566 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.474 [2024-11-05 10:36:11.398697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:45.474 #110 NEW cov: 12539 ft: 15917 corp: 32/1096b lim: 105 exec/s: 110 rss: 74Mb L: 63/90 MS: 1 CrossOver- 00:08:45.474 [2024-11-05 10:36:11.458421] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446468092164505599 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.474 [2024-11-05 10:36:11.458450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.474 #111 NEW cov: 12539 ft: 15921 corp: 33/1119b lim: 105 exec/s: 111 rss: 75Mb L: 23/90 MS: 1 ChangeByte- 00:08:45.474 [2024-11-05 10:36:11.518563] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583732603 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.474 [2024-11-05 10:36:11.518593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.732 #112 NEW cov: 12539 ft: 15932 corp: 34/1150b lim: 105 exec/s: 112 rss: 75Mb L: 31/90 MS: 1 ChangeByte- 00:08:45.732 [2024-11-05 10:36:11.578722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583732735 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.733 [2024-11-05 10:36:11.578750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.733 #113 NEW cov: 12539 
ft: 15939 corp: 35/1182b lim: 105 exec/s: 56 rss: 75Mb L: 32/90 MS: 1 InsertByte- 00:08:45.733 #113 DONE cov: 12539 ft: 15939 corp: 35/1182b lim: 105 exec/s: 56 rss: 75Mb 00:08:45.733 ###### Recommended dictionary. ###### 00:08:45.733 "\001:q\321\207\005\230n" # Uses: 2 00:08:45.733 ###### End of recommended dictionary. ###### 00:08:45.733 Done 113 runs in 2 second(s) 00:08:45.733 10:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:08:45.733 10:36:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:45.733 10:36:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:45.733 10:36:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:08:45.733 10:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:08:45.733 10:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:45.733 10:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:45.733 10:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:45.733 10:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:08:45.733 10:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:45.733 10:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:45.733 10:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:08:45.733 10:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:08:45.733 10:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:45.733 10:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:08:45.733 10:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:45.733 10:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:45.733 10:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:45.733 10:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:08:45.733 [2024-11-05 10:36:11.738596] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
00:08:45.733 [2024-11-05 10:36:11.738651] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2866788 ] 00:08:45.991 [2024-11-05 10:36:11.980656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.991 [2024-11-05 10:36:12.030502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.249 [2024-11-05 10:36:12.094522] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.249 [2024-11-05 10:36:12.110755] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:08:46.249 INFO: Running with entropic power schedule (0xFF, 100). 00:08:46.249 INFO: Seed: 3644162980 00:08:46.249 INFO: Loaded 1 modules (387441 inline 8-bit counters): 387441 [0x2c3ac4c, 0x2c995bd), 00:08:46.249 INFO: Loaded 1 PC tables (387441 PCs): 387441 [0x2c995c0,0x3282cd0), 00:08:46.249 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:46.249 INFO: A corpus is not provided, starting from an empty corpus 00:08:46.249 #2 INITED exec/s: 0 rss: 66Mb 00:08:46.249 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:46.249 This may also happen if the target rejected all inputs we tried so far 00:08:46.249 [2024-11-05 10:36:12.156940] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.249 [2024-11-05 10:36:12.156973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:46.249 [2024-11-05 10:36:12.157031] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.249 [2024-11-05 10:36:12.157045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:46.249 [2024-11-05 10:36:12.157102] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.249 [2024-11-05 10:36:12.157120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:46.249 [2024-11-05 10:36:12.157178] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.249 [2024-11-05 10:36:12.157195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:46.508 NEW_FUNC[1/717]: 0x455a88 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:08:46.508 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:46.508 #11 NEW cov: 12333 ft: 12329 corp: 2/117b lim: 120 exec/s: 0 rss: 73Mb L: 116/116 MS: 4 CopyPart-ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:08:46.508 [2024-11-05 10:36:12.477681] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.508 [2024-11-05 10:36:12.477719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:46.508 [2024-11-05 10:36:12.477794] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18442240474082181119 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.508 [2024-11-05 10:36:12.477807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:46.508 [2024-11-05 10:36:12.477864] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.508 [2024-11-05 10:36:12.477881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:46.508 [2024-11-05 10:36:12.477935] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.508 [2024-11-05 10:36:12.477951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:46.508 #12 NEW cov: 12446 ft: 12825 corp: 3/233b lim: 120 exec/s: 0 rss: 73Mb L: 116/116 MS: 1 ChangeBit- 00:08:46.508 [2024-11-05 10:36:12.537783] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.508 [2024-11-05 10:36:12.537817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:46.508 [2024-11-05 10:36:12.537871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18442240474082181119 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.508 [2024-11-05 10:36:12.537885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:46.508 [2024-11-05 10:36:12.537940] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.508 [2024-11-05 10:36:12.537957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:46.508 [2024-11-05 10:36:12.538014] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.508 [2024-11-05 10:36:12.538031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:46.508 #13 NEW cov: 12452 ft: 13174 corp: 4/349b lim: 120 exec/s: 0 rss: 74Mb L: 116/116 MS: 1 ShuffleBytes- 00:08:46.766 [2024-11-05 10:36:12.597364] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11285066962739960988 len:40093 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.766 [2024-11-05 10:36:12.597393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:46.766 #15 NEW cov: 12537 ft: 14348 corp: 5/393b lim: 120 exec/s: 0 
rss: 74Mb L: 44/116 MS: 2 ChangeBit-InsertRepeatedBytes- 00:08:46.766 [2024-11-05 10:36:12.648063] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.766 [2024-11-05 10:36:12.648091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:46.766 [2024-11-05 10:36:12.648144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.766 [2024-11-05 10:36:12.648161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:46.766 [2024-11-05 10:36:12.648216] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.766 [2024-11-05 10:36:12.648233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:46.766 [2024-11-05 10:36:12.648288] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744070236667903 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.766 [2024-11-05 10:36:12.648306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:46.766 #16 NEW cov: 12537 ft: 14429 corp: 6/509b lim: 120 exec/s: 0 rss: 74Mb L: 116/116 MS: 1 CMP- DE: "\000\000\177`X\016\3450"- 00:08:46.766 [2024-11-05 10:36:12.688206] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.767 [2024-11-05 10:36:12.688236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:46.767 [2024-11-05 10:36:12.688294] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.767 [2024-11-05 10:36:12.688309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:46.767 [2024-11-05 10:36:12.688369] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.767 [2024-11-05 10:36:12.688386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:46.767 [2024-11-05 10:36:12.688442] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744069414584831 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.767 [2024-11-05 10:36:12.688460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:46.767 #22 NEW cov: 12537 ft: 14558 corp: 7/625b lim: 120 exec/s: 0 rss: 74Mb L: 116/116 MS: 1 ChangeBinInt- 00:08:46.767 [2024-11-05 10:36:12.728272] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.767 [2024-11-05 10:36:12.728301] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:46.767 [2024-11-05 10:36:12.728372] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.767 [2024-11-05 10:36:12.728386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:46.767 [2024-11-05 10:36:12.728440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.767 [2024-11-05 10:36:12.728457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:46.767 [2024-11-05 10:36:12.728513] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744070236667903 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.767 [2024-11-05 10:36:12.728531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:46.767 #23 NEW cov: 12537 ft: 14652 corp: 8/741b lim: 120 exec/s: 0 rss: 74Mb L: 116/116 MS: 1 ChangeBinInt- 00:08:46.767 [2024-11-05 10:36:12.788446] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.767 [2024-11-05 10:36:12.788475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:46.767 [2024-11-05 10:36:12.788525] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18442240474082181119 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.767 [2024-11-05 10:36:12.788542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:46.767 [2024-11-05 10:36:12.788589] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.767 [2024-11-05 10:36:12.788606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:46.767 [2024-11-05 10:36:12.788665] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.767 [2024-11-05 10:36:12.788681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:46.767 #24 NEW cov: 12537 ft: 14691 corp: 9/857b lim: 120 exec/s: 0 rss: 74Mb L: 116/116 MS: 1 CopyPart- 00:08:47.025 [2024-11-05 10:36:12.848683] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.026 [2024-11-05 10:36:12.848717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:47.026 [2024-11-05 10:36:12.848777] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18442240474082181119 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.026 
[2024-11-05 10:36:12.848791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:47.026 [2024-11-05 10:36:12.848846] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.026 [2024-11-05 10:36:12.848863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:47.026 [2024-11-05 10:36:12.848920] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.026 [2024-11-05 10:36:12.848938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:47.026 #25 NEW cov: 12537 ft: 14755 corp: 10/976b lim: 120 exec/s: 0 rss: 74Mb L: 119/119 MS: 1 InsertRepeatedBytes- 00:08:47.026 [2024-11-05 10:36:12.888747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11285066962739960988 len:40093 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.026 [2024-11-05 10:36:12.888775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:47.026 [2024-11-05 10:36:12.888826] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.026 [2024-11-05 10:36:12.888844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:47.026 [2024-11-05 10:36:12.888878] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.026 [2024-11-05 10:36:12.888896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:47.026 [2024-11-05 10:36:12.888951] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.026 [2024-11-05 10:36:12.888968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:47.026 #26 NEW cov: 12537 ft: 14808 corp: 11/1085b lim: 120 exec/s: 0 rss: 74Mb L: 109/119 MS: 1 InsertRepeatedBytes- 00:08:47.026 [2024-11-05 10:36:12.948928] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11285066962739960988 len:40093 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.026 [2024-11-05 10:36:12.948957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:47.026 [2024-11-05 10:36:12.949005] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.026 [2024-11-05 10:36:12.949022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:47.026 [2024-11-05 10:36:12.949060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 
len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.026 [2024-11-05 10:36:12.949075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:47.026 [2024-11-05 10:36:12.949130] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446742978492891135 len:24665 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.026 [2024-11-05 10:36:12.949147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:47.026 #27 NEW cov: 12537 ft: 14855 corp: 12/1202b lim: 120 exec/s: 0 rss: 74Mb L: 117/119 MS: 1 PersAutoDict- DE: "\000\000\177`X\016\3450"- 00:08:47.026 [2024-11-05 10:36:13.009125] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.026 [2024-11-05 10:36:13.009153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:47.026 [2024-11-05 10:36:13.009203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.026 [2024-11-05 10:36:13.009219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:47.026 [2024-11-05 10:36:13.009260] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.026 [2024-11-05 10:36:13.009278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:47.026 [2024-11-05 10:36:13.009335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744070236667903 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.026 [2024-11-05 10:36:13.009365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:47.026 NEW_FUNC[1/1]: 0x1c30d58 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:47.026 #28 NEW cov: 12560 ft: 14909 corp: 13/1318b lim: 120 exec/s: 0 rss: 74Mb L: 116/119 MS: 1 ChangeASCIIInt- 00:08:47.026 [2024-11-05 10:36:13.069276] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.026 [2024-11-05 10:36:13.069305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:47.026 [2024-11-05 10:36:13.069359] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.026 [2024-11-05 10:36:13.069375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:47.026 [2024-11-05 10:36:13.069426] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.026 [2024-11-05 10:36:13.069443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:47.026 [2024-11-05 10:36:13.069499] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.026 [2024-11-05 10:36:13.069516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:47.026 #29 NEW cov: 12560 ft: 14965 corp: 14/1434b lim: 120 exec/s: 0 rss: 74Mb L: 116/119 MS: 1 CopyPart- 00:08:47.285 [2024-11-05 10:36:13.109459] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.285 [2024-11-05 10:36:13.109487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:47.285 [2024-11-05 10:36:13.109533] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.285 [2024-11-05 10:36:13.109551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:47.285 [2024-11-05 10:36:13.109589] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.285 [2024-11-05 10:36:13.109609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:47.285 [2024-11-05 10:36:13.109667] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.285 [2024-11-05 10:36:13.109683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:47.285 #30 NEW cov: 12560 ft: 15029 corp: 15/1550b lim: 120 exec/s: 0 rss: 74Mb L: 116/119 MS: 1 PersAutoDict- DE: "\000\000\177`X\016\3450"- 00:08:47.285 [2024-11-05 10:36:13.149546] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.285 [2024-11-05 10:36:13.149574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:47.285 [2024-11-05 10:36:13.149621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18442240474082181119 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.285 [2024-11-05 10:36:13.149638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:47.285 [2024-11-05 10:36:13.149677] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.285 [2024-11-05 10:36:13.149694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:47.285 [2024-11-05 10:36:13.149763] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.285 [2024-11-05 10:36:13.149788] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:47.285 #31 NEW cov: 12560 ft: 15053 corp: 16/1647b lim: 120 exec/s: 31 rss: 74Mb L: 97/119 MS: 1 EraseBytes- 00:08:47.285 [2024-11-05 10:36:13.209582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11285066962739960988 len:40093 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.285 [2024-11-05 10:36:13.209610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:47.285 [2024-11-05 10:36:13.209667] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.285 [2024-11-05 10:36:13.209681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:47.285 [2024-11-05 10:36:13.209732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.285 [2024-11-05 10:36:13.209750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:47.285 #32 NEW cov: 12560 ft: 15392 corp: 17/1729b lim: 120 exec/s: 32 rss: 74Mb L: 82/119 MS: 1 CrossOver- 00:08:47.285 [2024-11-05 10:36:13.250030] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.285 [2024-11-05 10:36:13.250060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:47.285 [2024-11-05 10:36:13.250112] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1012763458879356686 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.285 [2024-11-05 10:36:13.250129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:47.285 [2024-11-05 10:36:13.250175] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.285 [2024-11-05 10:36:13.250195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:47.285 [2024-11-05 10:36:13.250249] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:3530822105179885285 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.285 [2024-11-05 10:36:13.250265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:47.285 [2024-11-05 10:36:13.250321] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.285 [2024-11-05 10:36:13.250337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:47.285 #33 NEW cov: 12560 ft: 15444 corp: 18/1849b lim: 120 exec/s: 33 rss: 74Mb L: 120/120 MS: 1 InsertRepeatedBytes- 00:08:47.285 [2024-11-05 10:36:13.289934] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.285 [2024-11-05 10:36:13.289962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:47.285 [2024-11-05 10:36:13.290014] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.285 [2024-11-05 10:36:13.290030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:47.285 [2024-11-05 10:36:13.290075] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.285 [2024-11-05 10:36:13.290092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:47.285 [2024-11-05 10:36:13.290148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744069414584831 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.285 [2024-11-05 10:36:13.290165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:47.285 #34 NEW cov: 12560 ft: 15476 corp: 19/1965b lim: 120 exec/s: 34 rss: 74Mb L: 116/120 MS: 1 ChangeBit- 00:08:47.285 [2024-11-05 10:36:13.350157] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.285 [2024-11-05 10:36:13.350185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:47.285 [2024-11-05 10:36:13.350233] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.285 [2024-11-05 10:36:13.350251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:47.285 [2024-11-05 10:36:13.350291] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.285 [2024-11-05 10:36:13.350309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:47.285 [2024-11-05 10:36:13.350364] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.285 [2024-11-05 10:36:13.350381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:47.544 #35 NEW cov: 12560 ft: 15505 corp: 20/2081b lim: 120 exec/s: 35 rss: 74Mb L: 116/120 MS: 1 ShuffleBytes- 00:08:47.544 [2024-11-05 10:36:13.410318] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65318 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.544 [2024-11-05 10:36:13.410347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:08:47.544 [2024-11-05 10:36:13.410394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.544 [2024-11-05 10:36:13.410411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:47.544 [2024-11-05 10:36:13.410450] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.544 [2024-11-05 10:36:13.410464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:47.544 [2024-11-05 10:36:13.410520] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073259778047 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.544 [2024-11-05 10:36:13.410536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:47.544 #36 NEW cov: 12560 ft: 15528 corp: 21/2198b lim: 120 exec/s: 36 rss: 75Mb L: 117/120 MS: 1 InsertByte- 00:08:47.544 [2024-11-05 10:36:13.470459] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.544 [2024-11-05 10:36:13.470489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:47.544 [2024-11-05 10:36:13.470543] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18442240474082181119 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.544 [2024-11-05 10:36:13.470559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:47.544 [2024-11-05 10:36:13.470615] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.544 [2024-11-05 10:36:13.470632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:47.544 [2024-11-05 10:36:13.470689] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.544 [2024-11-05 10:36:13.470706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:47.544 #37 NEW cov: 12560 ft: 15562 corp: 22/2317b lim: 120 exec/s: 37 rss: 75Mb L: 119/120 MS: 1 ChangeBit- 00:08:47.544 [2024-11-05 10:36:13.530626] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.544 [2024-11-05 10:36:13.530655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:47.544 [2024-11-05 10:36:13.530705] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.544 [2024-11-05 10:36:13.530727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:47.544 [2024-11-05 10:36:13.530784] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.544 [2024-11-05 10:36:13.530801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:47.544 [2024-11-05 10:36:13.530871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744070236667903 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.544 [2024-11-05 10:36:13.530891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:47.544 #38 NEW cov: 12560 ft: 15568 corp: 23/2433b lim: 120 exec/s: 38 rss: 75Mb L: 116/120 MS: 1 PersAutoDict- DE: "\000\000\177`X\016\3450"- 00:08:47.544 [2024-11-05 10:36:13.570773] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069590417407 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.544 [2024-11-05 10:36:13.570802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:47.544 [2024-11-05 10:36:13.570867] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.544 [2024-11-05 10:36:13.570885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:47.544 [2024-11-05 10:36:13.570936] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.544 [2024-11-05 10:36:13.570953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:47.544 [2024-11-05 10:36:13.571009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073259778047 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.544 [2024-11-05 10:36:13.571026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:47.544 #39 NEW cov: 12560 ft: 15575 corp: 24/2550b lim: 120 exec/s: 39 rss: 75Mb L: 117/120 MS: 1 InsertByte- 00:08:47.803 [2024-11-05 10:36:13.630993] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.803 [2024-11-05 10:36:13.631022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:47.803 [2024-11-05 10:36:13.631071] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.803 [2024-11-05 10:36:13.631088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:47.803 [2024-11-05 10:36:13.631127] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.803 [2024-11-05 10:36:13.631142] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:47.803 [2024-11-05 10:36:13.631198] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744069664485631 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.803 [2024-11-05 10:36:13.631216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:47.803 #40 NEW cov: 12560 ft: 15577 corp: 25/2666b lim: 120 exec/s: 40 rss: 75Mb L: 116/120 MS: 1 CrossOver- 00:08:47.803 [2024-11-05 10:36:13.671039] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.803 [2024-11-05 10:36:13.671066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:47.803 [2024-11-05 10:36:13.671116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18442240474082181119 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.803 [2024-11-05 10:36:13.671132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:47.803 [2024-11-05 10:36:13.671185] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.803 [2024-11-05 10:36:13.671203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:47.803 [2024-11-05 10:36:13.671258] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.803 [2024-11-05 10:36:13.671275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:47.803 #41 NEW cov: 12560 ft: 15586 corp: 26/2785b lim: 120 exec/s: 41 rss: 75Mb L: 119/120 MS: 1 CopyPart- 00:08:47.803 [2024-11-05 10:36:13.731285] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.803 [2024-11-05 10:36:13.731313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:47.803 [2024-11-05 10:36:13.731372] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446743790241710079 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.803 [2024-11-05 10:36:13.731385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:47.803 [2024-11-05 10:36:13.731441] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.803 [2024-11-05 10:36:13.731457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:47.803 [2024-11-05 10:36:13.731513] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744070891955504 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.803 
[2024-11-05 10:36:13.731530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:47.803 #42 NEW cov: 12560 ft: 15606 corp: 27/2904b lim: 120 exec/s: 42 rss: 75Mb L: 119/120 MS: 1 InsertRepeatedBytes- 00:08:47.803 [2024-11-05 10:36:13.771308] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.803 [2024-11-05 10:36:13.771337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:47.803 [2024-11-05 10:36:13.771388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18376657904020226047 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.803 [2024-11-05 10:36:13.771405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:47.803 [2024-11-05 10:36:13.771450] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.803 [2024-11-05 10:36:13.771467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:47.803 [2024-11-05 10:36:13.771523] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744070236667903 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.803 [2024-11-05 10:36:13.771539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:47.803 #43 NEW cov: 12560 ft: 15636 corp: 28/3020b lim: 120 exec/s: 43 rss: 75Mb L: 116/120 MS: 1 ChangeBinInt- 00:08:47.803 [2024-11-05 10:36:13.811255] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11285066962739960988 len:40093 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.803 [2024-11-05 10:36:13.811284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:47.803 [2024-11-05 10:36:13.811348] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.803 [2024-11-05 10:36:13.811361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:47.803 [2024-11-05 10:36:13.811417] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.803 [2024-11-05 10:36:13.811432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:47.803 #49 NEW cov: 12560 ft: 15671 corp: 29/3102b lim: 120 exec/s: 49 rss: 75Mb L: 82/120 MS: 1 ChangeByte- 00:08:47.803 [2024-11-05 10:36:13.871831] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.803 [2024-11-05 10:36:13.871859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:47.803 [2024-11-05 
10:36:13.871909] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18442240474082181119 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.803 [2024-11-05 10:36:13.871926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:47.804 [2024-11-05 10:36:13.871967] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.804 [2024-11-05 10:36:13.871982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:47.804 [2024-11-05 10:36:13.872035] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.804 [2024-11-05 10:36:13.872051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:47.804 [2024-11-05 10:36:13.872123] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.804 [2024-11-05 10:36:13.872141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:48.062 #50 NEW cov: 12560 ft: 15691 corp: 30/3222b lim: 120 exec/s: 50 rss: 75Mb L: 120/120 MS: 1 InsertByte- 00:08:48.062 [2024-11-05 10:36:13.931801] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.062 [2024-11-05 10:36:13.931830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:48.062 [2024-11-05 10:36:13.931879] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.062 [2024-11-05 10:36:13.931895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:48.062 [2024-11-05 10:36:13.931935] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.062 [2024-11-05 10:36:13.931952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:48.062 [2024-11-05 10:36:13.932009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.062 [2024-11-05 10:36:13.932026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:48.062 #51 NEW cov: 12560 ft: 15697 corp: 31/3338b lim: 120 exec/s: 51 rss: 75Mb L: 116/120 MS: 1 ChangeByte- 00:08:48.062 [2024-11-05 10:36:13.992017] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.062 [2024-11-05 10:36:13.992046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:08:48.062 [2024-11-05 10:36:13.992099] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.062 [2024-11-05 10:36:13.992116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:48.062 [2024-11-05 10:36:13.992170] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.062 [2024-11-05 10:36:13.992186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:48.062 [2024-11-05 10:36:13.992240] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744069414584831 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.062 [2024-11-05 10:36:13.992256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:48.062 #52 NEW cov: 12560 ft: 15709 corp: 32/3455b lim: 120 exec/s: 52 rss: 75Mb L: 117/120 MS: 1 InsertByte- 00:08:48.062 [2024-11-05 10:36:14.032300] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.062 [2024-11-05 10:36:14.032331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:48.062 [2024-11-05 10:36:14.032378] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1012763458879356686 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.062 [2024-11-05 10:36:14.032396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:48.062 [2024-11-05 10:36:14.032442] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.062 [2024-11-05 10:36:14.032461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:48.062 [2024-11-05 10:36:14.032515] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:3530822105179885285 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.062 [2024-11-05 10:36:14.032532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:48.062 [2024-11-05 10:36:14.032589] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.062 [2024-11-05 10:36:14.032607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:48.062 #53 NEW cov: 12560 ft: 15713 corp: 33/3575b lim: 120 exec/s: 53 rss: 75Mb L: 120/120 MS: 1 ChangeBit- 00:08:48.062 [2024-11-05 10:36:14.092450] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.062 [2024-11-05 10:36:14.092477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:48.062 [2024-11-05 10:36:14.092527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18442240474082181119 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.062 [2024-11-05 10:36:14.092545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:48.062 [2024-11-05 10:36:14.092595] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.062 [2024-11-05 10:36:14.092611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:48.062 [2024-11-05 10:36:14.092667] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.062 [2024-11-05 10:36:14.092684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:48.062 [2024-11-05 10:36:14.092751] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.062 [2024-11-05 10:36:14.092768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:48.062 #54 NEW cov: 12560 ft: 15718 corp: 34/3695b lim: 120 exec/s: 54 rss: 75Mb L: 120/120 MS: 1 ShuffleBytes- 00:08:48.321 [2024-11-05 10:36:14.152422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65318 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.321 [2024-11-05 10:36:14.152451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:48.321 [2024-11-05 10:36:14.152503] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.321 [2024-11-05 10:36:14.152520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:48.321 [2024-11-05 10:36:14.152567] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.321 [2024-11-05 10:36:14.152584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:48.321 [2024-11-05 10:36:14.152638] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073259778047 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.321 [2024-11-05 10:36:14.152655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:48.321 #55 NEW cov: 12560 ft: 15735 corp: 35/3812b lim: 120 exec/s: 27 rss: 75Mb L: 117/120 MS: 1 ChangeBinInt- 00:08:48.321 #55 DONE cov: 12560 ft: 15735 corp: 35/3812b lim: 120 exec/s: 27 rss: 75Mb 00:08:48.321 ###### Recommended dictionary. ###### 00:08:48.321 "\000\000\177`X\016\3450" # Uses: 4 00:08:48.321 ###### End of recommended dictionary. 
###### 00:08:48.321 Done 55 runs in 2 second(s) 00:08:48.321 10:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:08:48.321 10:36:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:48.321 10:36:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:48.321 10:36:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:08:48.321 10:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:08:48.321 10:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:48.321 10:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:48.321 10:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:48.321 10:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:08:48.321 10:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:48.321 10:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:48.321 10:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:08:48.321 10:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:08:48.321 10:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:48.321 10:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:08:48.321 10:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:48.321 10:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:48.321 10:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:48.321 10:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:08:48.321 [2024-11-05 10:36:14.333509] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
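The nvmf/run.sh trace above contains everything needed to repeat this fuzz run by hand, outside Jenkins. A minimal sketch, assuming the same workspace layout as in this log (the paths, port 4418, and all fuzzer flags are copied from the trace; the output redirections are an assumption, since the shell trace does not show them):

  #!/usr/bin/env bash
  # Paths taken from this log; adjust SPDK to a local checkout.
  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

  # run.sh@38: rewrite the default NVMe-oF TCP port 4420 to this run's port 4418.
  sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' \
      "$SPDK/test/fuzz/llvm/nvmf/fuzz_json.conf" > /tmp/fuzz_json_18.conf

  # run.sh@41-42 and run.sh@32: LeakSanitizer suppressions used while fuzzing.
  printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > /var/tmp/suppress_nvmf_fuzz
  export LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0

  # run.sh@35 and run.sh@45: create the corpus directory and launch fuzzer type 18
  # for 1 second on core 0x1 against the TCP listener at 127.0.0.1:4418.
  mkdir -p "$SPDK/../corpus/llvm_nvmf_18"
  "$SPDK/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
      -P "$SPDK/../output/llvm/" \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' \
      -c /tmp/fuzz_json_18.conf -t 1 -D "$SPDK/../corpus/llvm_nvmf_18" -Z 18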
00:08:48.321 [2024-11-05 10:36:14.333564] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2867146 ] 00:08:48.580 [2024-11-05 10:36:14.562970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.580 [2024-11-05 10:36:14.610691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.839 [2024-11-05 10:36:14.674682] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:48.839 [2024-11-05 10:36:14.690927] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:08:48.839 INFO: Running with entropic power schedule (0xFF, 100). 00:08:48.839 INFO: Seed: 1928193816 00:08:48.839 INFO: Loaded 1 modules (387441 inline 8-bit counters): 387441 [0x2c3ac4c, 0x2c995bd), 00:08:48.839 INFO: Loaded 1 PC tables (387441 PCs): 387441 [0x2c995c0,0x3282cd0), 00:08:48.839 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:48.839 INFO: A corpus is not provided, starting from an empty corpus 00:08:48.839 #2 INITED exec/s: 0 rss: 66Mb 00:08:48.839 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:48.839 This may also happen if the target rejected all inputs we tried so far 00:08:48.839 [2024-11-05 10:36:14.763053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:48.839 [2024-11-05 10:36:14.763100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:48.839 [2024-11-05 10:36:14.763171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:48.839 [2024-11-05 10:36:14.763192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:49.405 NEW_FUNC[1/712]: 0x459378 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:08:49.405 NEW_FUNC[2/712]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:49.405 #18 NEW cov: 12240 ft: 12270 corp: 2/50b lim: 100 exec/s: 0 rss: 73Mb L: 49/49 MS: 1 InsertRepeatedBytes- 00:08:49.405 [2024-11-05 10:36:15.243804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:49.405 [2024-11-05 10:36:15.243851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:49.405 [2024-11-05 10:36:15.243932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:49.405 [2024-11-05 10:36:15.243953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:49.405 NEW_FUNC[1/3]: 0x1f781c8 in spdk_thread_get_from_ctx /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:820 00:08:49.405 NEW_FUNC[2/3]: 0x1f78368 in spdk_thread_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1180 00:08:49.405 #19 NEW cov: 12389 ft: 12825 corp: 3/99b lim: 100 exec/s: 0 rss: 73Mb L: 49/49 MS: 1 
ShuffleBytes- 00:08:49.405 [2024-11-05 10:36:15.324153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:49.405 [2024-11-05 10:36:15.324183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:49.405 [2024-11-05 10:36:15.324265] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:49.405 [2024-11-05 10:36:15.324282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:49.405 #25 NEW cov: 12395 ft: 13108 corp: 4/148b lim: 100 exec/s: 0 rss: 73Mb L: 49/49 MS: 1 CopyPart- 00:08:49.405 [2024-11-05 10:36:15.374350] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:49.405 [2024-11-05 10:36:15.374379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:49.405 [2024-11-05 10:36:15.374457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:49.405 [2024-11-05 10:36:15.374475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:49.405 #26 NEW cov: 12480 ft: 13396 corp: 5/197b lim: 100 exec/s: 0 rss: 73Mb L: 49/49 MS: 1 ChangeBit- 00:08:49.405 [2024-11-05 10:36:15.444760] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:49.405 [2024-11-05 10:36:15.444790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:49.405 [2024-11-05 10:36:15.444881] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:49.405 [2024-11-05 10:36:15.444899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:49.405 #27 NEW cov: 12480 ft: 13545 corp: 6/246b lim: 100 exec/s: 0 rss: 73Mb L: 49/49 MS: 1 ChangeBinInt- 00:08:49.664 [2024-11-05 10:36:15.495064] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:49.664 [2024-11-05 10:36:15.495094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:49.664 [2024-11-05 10:36:15.495148] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:49.664 [2024-11-05 10:36:15.495163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:49.664 #28 NEW cov: 12480 ft: 13604 corp: 7/302b lim: 100 exec/s: 0 rss: 73Mb L: 56/56 MS: 1 InsertRepeatedBytes- 00:08:49.664 [2024-11-05 10:36:15.565596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:49.664 [2024-11-05 10:36:15.565624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:49.664 [2024-11-05 10:36:15.565704] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:49.664 [2024-11-05 10:36:15.565725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:49.664 #29 NEW cov: 12480 ft: 13675 corp: 8/351b lim: 100 exec/s: 0 rss: 73Mb L: 49/56 MS: 1 CopyPart- 00:08:49.664 [2024-11-05 10:36:15.616100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:49.664 [2024-11-05 10:36:15.616128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:49.664 [2024-11-05 10:36:15.616202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:49.664 [2024-11-05 10:36:15.616217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:49.664 NEW_FUNC[1/1]: 0x1c30d58 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:49.664 #30 NEW cov: 12503 ft: 13724 corp: 9/400b lim: 100 exec/s: 0 rss: 73Mb L: 49/56 MS: 1 ChangeBinInt- 00:08:49.664 [2024-11-05 10:36:15.666982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:49.664 [2024-11-05 10:36:15.667010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:49.664 [2024-11-05 10:36:15.667111] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:49.664 [2024-11-05 10:36:15.667128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:49.664 [2024-11-05 10:36:15.667205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:49.664 [2024-11-05 10:36:15.667223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:49.664 #33 NEW cov: 12503 ft: 14049 corp: 10/476b lim: 100 exec/s: 0 rss: 73Mb L: 76/76 MS: 3 ChangeByte-CopyPart-InsertRepeatedBytes- 00:08:49.664 [2024-11-05 10:36:15.717126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:49.664 [2024-11-05 10:36:15.717153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:49.664 [2024-11-05 10:36:15.717248] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:49.664 [2024-11-05 10:36:15.717265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:49.664 #34 NEW cov: 12503 ft: 14077 corp: 11/525b lim: 100 exec/s: 34 rss: 73Mb L: 49/76 MS: 1 ShuffleBytes- 00:08:49.922 [2024-11-05 10:36:15.767722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:49.922 [2024-11-05 10:36:15.767749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:49.922 [2024-11-05 10:36:15.767835] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:49.922 [2024-11-05 10:36:15.767867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:49.922 #35 NEW cov: 12503 ft: 14087 corp: 12/574b lim: 100 
exec/s: 35 rss: 73Mb L: 49/76 MS: 1 ChangeByte- 00:08:49.922 [2024-11-05 10:36:15.818131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:49.922 [2024-11-05 10:36:15.818158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:49.922 [2024-11-05 10:36:15.818256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:49.922 [2024-11-05 10:36:15.818273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:49.922 #36 NEW cov: 12503 ft: 14126 corp: 13/623b lim: 100 exec/s: 36 rss: 73Mb L: 49/76 MS: 1 ChangeBit- 00:08:49.922 [2024-11-05 10:36:15.889280] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:49.922 [2024-11-05 10:36:15.889308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:49.922 [2024-11-05 10:36:15.889405] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:49.922 [2024-11-05 10:36:15.889423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:49.922 [2024-11-05 10:36:15.889524] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:49.922 [2024-11-05 10:36:15.889543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:49.922 [2024-11-05 10:36:15.889638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:49.922 [2024-11-05 10:36:15.889658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:49.922 #37 NEW cov: 12503 ft: 14397 corp: 14/719b lim: 100 exec/s: 37 rss: 73Mb L: 96/96 MS: 1 InsertRepeatedBytes- 00:08:49.922 [2024-11-05 10:36:15.958992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:49.922 [2024-11-05 10:36:15.959020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:49.922 [2024-11-05 10:36:15.959103] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:49.922 [2024-11-05 10:36:15.959121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:49.922 #38 NEW cov: 12503 ft: 14402 corp: 15/766b lim: 100 exec/s: 38 rss: 73Mb L: 47/96 MS: 1 EraseBytes- 00:08:50.180 [2024-11-05 10:36:16.009249] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:50.180 [2024-11-05 10:36:16.009281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:50.180 [2024-11-05 10:36:16.009364] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:50.180 [2024-11-05 10:36:16.009381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:50.180 #39 
NEW cov: 12503 ft: 14433 corp: 16/815b lim: 100 exec/s: 39 rss: 73Mb L: 49/96 MS: 1 ChangeBinInt- 00:08:50.180 [2024-11-05 10:36:16.059982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:50.180 [2024-11-05 10:36:16.060010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:50.180 [2024-11-05 10:36:16.060091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:50.180 [2024-11-05 10:36:16.060110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:50.180 [2024-11-05 10:36:16.060198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:50.180 [2024-11-05 10:36:16.060215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:50.180 #40 NEW cov: 12503 ft: 14448 corp: 17/877b lim: 100 exec/s: 40 rss: 73Mb L: 62/96 MS: 1 CopyPart- 00:08:50.180 [2024-11-05 10:36:16.110042] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:50.180 [2024-11-05 10:36:16.110072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:50.180 [2024-11-05 10:36:16.110135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:50.180 [2024-11-05 10:36:16.110157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:50.180 #41 NEW cov: 12503 ft: 14494 corp: 18/934b lim: 100 exec/s: 41 rss: 74Mb L: 57/96 MS: 1 InsertByte- 00:08:50.180 [2024-11-05 10:36:16.180365] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:50.180 [2024-11-05 10:36:16.180397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:50.180 [2024-11-05 10:36:16.180476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:50.180 [2024-11-05 10:36:16.180498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:50.180 #42 NEW cov: 12503 ft: 14525 corp: 19/983b lim: 100 exec/s: 42 rss: 74Mb L: 49/96 MS: 1 ShuffleBytes- 00:08:50.180 [2024-11-05 10:36:16.251075] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:50.180 [2024-11-05 10:36:16.251105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:50.180 [2024-11-05 10:36:16.251182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:50.180 [2024-11-05 10:36:16.251199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:50.180 [2024-11-05 10:36:16.251297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:50.180 [2024-11-05 10:36:16.251315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 
sqhd:0004 p:0 m:0 dnr:1 00:08:50.439 #43 NEW cov: 12503 ft: 14556 corp: 20/1050b lim: 100 exec/s: 43 rss: 74Mb L: 67/96 MS: 1 EraseBytes- 00:08:50.439 [2024-11-05 10:36:16.321418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:50.439 [2024-11-05 10:36:16.321445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:50.439 [2024-11-05 10:36:16.321549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:50.439 [2024-11-05 10:36:16.321567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:50.439 [2024-11-05 10:36:16.321659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:50.439 [2024-11-05 10:36:16.321676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:50.439 #44 NEW cov: 12503 ft: 14567 corp: 21/1112b lim: 100 exec/s: 44 rss: 74Mb L: 62/96 MS: 1 ShuffleBytes- 00:08:50.440 [2024-11-05 10:36:16.391654] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:50.440 [2024-11-05 10:36:16.391680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:50.440 [2024-11-05 10:36:16.391790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:50.440 [2024-11-05 10:36:16.391810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:50.440 [2024-11-05 10:36:16.391906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:50.440 [2024-11-05 10:36:16.391919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:50.440 #45 NEW cov: 12503 ft: 14579 corp: 22/1188b lim: 100 exec/s: 45 rss: 74Mb L: 76/96 MS: 1 ChangeBinInt- 00:08:50.440 [2024-11-05 10:36:16.442364] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:50.440 [2024-11-05 10:36:16.442390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:50.440 [2024-11-05 10:36:16.442495] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:50.440 [2024-11-05 10:36:16.442513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:50.440 [2024-11-05 10:36:16.442586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:50.440 [2024-11-05 10:36:16.442606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:50.440 [2024-11-05 10:36:16.442704] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:50.440 [2024-11-05 10:36:16.442727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:50.440 #46 NEW cov: 12503 ft: 14655 
corp: 23/1282b lim: 100 exec/s: 46 rss: 74Mb L: 94/96 MS: 1 InsertRepeatedBytes- 00:08:50.440 [2024-11-05 10:36:16.492420] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:50.440 [2024-11-05 10:36:16.492450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:50.440 [2024-11-05 10:36:16.492551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:50.440 [2024-11-05 10:36:16.492569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:50.440 [2024-11-05 10:36:16.492668] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:50.440 [2024-11-05 10:36:16.492687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:50.440 #49 NEW cov: 12503 ft: 14713 corp: 24/1354b lim: 100 exec/s: 49 rss: 74Mb L: 72/96 MS: 3 CopyPart-ChangeBit-InsertRepeatedBytes- 00:08:50.729 [2024-11-05 10:36:16.543001] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:50.729 [2024-11-05 10:36:16.543026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:50.729 [2024-11-05 10:36:16.543141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:50.729 [2024-11-05 10:36:16.543160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:50.729 [2024-11-05 10:36:16.543250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:50.729 [2024-11-05 10:36:16.543266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:50.729 [2024-11-05 10:36:16.543358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:50.729 [2024-11-05 10:36:16.543377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:50.729 #50 NEW cov: 12503 ft: 14733 corp: 25/1448b lim: 100 exec/s: 50 rss: 74Mb L: 94/96 MS: 1 ChangeByte- 00:08:50.729 [2024-11-05 10:36:16.613413] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:50.729 [2024-11-05 10:36:16.613442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:50.729 [2024-11-05 10:36:16.613544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:50.729 [2024-11-05 10:36:16.613560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:50.729 [2024-11-05 10:36:16.613652] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:50.729 [2024-11-05 10:36:16.613667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:50.729 [2024-11-05 10:36:16.613743] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:50.729 [2024-11-05 10:36:16.613764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:50.729 #51 NEW cov: 12503 ft: 14741 corp: 26/1533b lim: 100 exec/s: 51 rss: 74Mb L: 85/96 MS: 1 CopyPart- 00:08:50.729 [2024-11-05 10:36:16.683314] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:50.729 [2024-11-05 10:36:16.683341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:50.729 [2024-11-05 10:36:16.683419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:50.729 [2024-11-05 10:36:16.683436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:50.729 #52 NEW cov: 12503 ft: 14746 corp: 27/1582b lim: 100 exec/s: 52 rss: 74Mb L: 49/96 MS: 1 ChangeBinInt- 00:08:50.729 [2024-11-05 10:36:16.733817] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:50.729 [2024-11-05 10:36:16.733846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:50.729 [2024-11-05 10:36:16.733932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:50.729 [2024-11-05 10:36:16.733951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:50.729 #53 NEW cov: 12503 ft: 14759 corp: 28/1637b lim: 100 exec/s: 26 rss: 74Mb L: 55/96 MS: 1 EraseBytes- 00:08:50.729 #53 DONE cov: 12503 ft: 14759 corp: 28/1637b lim: 100 exec/s: 26 rss: 74Mb 00:08:50.729 Done 53 runs in 2 second(s) 00:08:51.027 10:36:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:08:51.027 10:36:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:51.027 10:36:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:51.027 10:36:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:08:51.027 10:36:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:08:51.027 10:36:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:51.027 10:36:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:51.027 10:36:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:51.027 10:36:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:08:51.027 10:36:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:51.027 10:36:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:51.027 10:36:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:08:51.027 10:36:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:08:51.027 10:36:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:51.027 10:36:16 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:08:51.027 10:36:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:51.027 10:36:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:51.027 10:36:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:51.027 10:36:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:08:51.027 [2024-11-05 10:36:16.916679] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:08:51.027 [2024-11-05 10:36:16.916758] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2867501 ] 00:08:51.339 [2024-11-05 10:36:17.181941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.339 [2024-11-05 10:36:17.231084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.339 [2024-11-05 10:36:17.295032] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.339 [2024-11-05 10:36:17.311259] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:08:51.339 INFO: Running with entropic power schedule (0xFF, 100). 00:08:51.339 INFO: Seed: 253223839 00:08:51.340 INFO: Loaded 1 modules (387441 inline 8-bit counters): 387441 [0x2c3ac4c, 0x2c995bd), 00:08:51.340 INFO: Loaded 1 PC tables (387441 PCs): 387441 [0x2c995c0,0x3282cd0), 00:08:51.340 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:51.340 INFO: A corpus is not provided, starting from an empty corpus 00:08:51.340 #2 INITED exec/s: 0 rss: 66Mb 00:08:51.340 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
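The "INFO: 0 files found" and "A corpus is not provided, starting from an empty corpus" lines above mean the freshly created -D directory held no seed inputs when the run started, which is also why libFuzzer prints the coverage warning here. A hedged sketch of seeding that directory before a run so later invocations build on earlier findings; the saved-seeds path is hypothetical, this log does not show the runner doing this, and it assumes the -D directory behaves like an ordinary libFuzzer corpus directory (read at startup, new inputs written back):

  # Hypothetical seed location; any directory of previously interesting inputs works.
  SEEDS=/var/jenkins/saved_seeds/nvmf
  CORPUS=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19
  mkdir -p "$CORPUS"
  cp -n "$SEEDS"/* "$CORPUS"/ 2>/dev/null || true   # tolerate an empty or missing seed dir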
00:08:51.340 This may also happen if the target rejected all inputs we tried so far 00:08:51.340 [2024-11-05 10:36:17.382247] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:11 00:08:51.340 [2024-11-05 10:36:17.382294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:51.856 NEW_FUNC[1/715]: 0x45c338 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:08:51.856 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:51.856 #10 NEW cov: 12250 ft: 12247 corp: 2/12b lim: 50 exec/s: 0 rss: 73Mb L: 11/11 MS: 3 CopyPart-CrossOver-InsertRepeatedBytes- 00:08:51.856 [2024-11-05 10:36:17.863306] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:838860800 len:11 00:08:51.856 [2024-11-05 10:36:17.863358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:51.856 #11 NEW cov: 12367 ft: 12832 corp: 3/23b lim: 50 exec/s: 0 rss: 73Mb L: 11/11 MS: 1 ChangeByte- 00:08:51.856 [2024-11-05 10:36:17.933500] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2814750605967360 len:11 00:08:51.856 [2024-11-05 10:36:17.933531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:52.114 #14 NEW cov: 12373 ft: 13101 corp: 4/34b lim: 50 exec/s: 0 rss: 73Mb L: 11/11 MS: 3 EraseBytes-ChangeBit-CrossOver- 00:08:52.114 [2024-11-05 10:36:18.004476] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229782938129862929 len:4370 00:08:52.114 [2024-11-05 10:36:18.004507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:52.114 [2024-11-05 10:36:18.004587] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 00:08:52.114 [2024-11-05 10:36:18.004606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:52.114 [2024-11-05 10:36:18.004692] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1229782938247303441 len:4370 00:08:52.114 [2024-11-05 10:36:18.004716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:52.114 #15 NEW cov: 12458 ft: 13784 corp: 5/68b lim: 50 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:08:52.114 [2024-11-05 10:36:18.054929] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229782938129862929 len:4370 00:08:52.114 [2024-11-05 10:36:18.054959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:52.114 [2024-11-05 10:36:18.055045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 00:08:52.114 [2024-11-05 10:36:18.055063] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:52.114 [2024-11-05 10:36:18.055148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1229782938247303441 len:4370 00:08:52.114 [2024-11-05 10:36:18.055167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:52.114 #21 NEW cov: 12458 ft: 13855 corp: 6/101b lim: 50 exec/s: 0 rss: 73Mb L: 33/34 MS: 1 EraseBytes- 00:08:52.114 [2024-11-05 10:36:18.124984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2814750605967360 len:235 00:08:52.114 [2024-11-05 10:36:18.125018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:52.114 #22 NEW cov: 12458 ft: 13920 corp: 7/112b lim: 50 exec/s: 0 rss: 73Mb L: 11/34 MS: 1 ChangeByte- 00:08:52.373 [2024-11-05 10:36:18.195657] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:838860800 len:37 00:08:52.373 [2024-11-05 10:36:18.195690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:52.373 [2024-11-05 10:36:18.195769] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2604246222170760228 len:9253 00:08:52.373 [2024-11-05 10:36:18.195789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:52.373 #23 NEW cov: 12458 ft: 14240 corp: 8/138b lim: 50 exec/s: 0 rss: 73Mb L: 26/34 MS: 1 InsertRepeatedBytes- 00:08:52.373 [2024-11-05 10:36:18.245791] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:842989568 len:11 00:08:52.373 [2024-11-05 10:36:18.245820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:52.373 NEW_FUNC[1/1]: 0x1c30d58 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:52.373 #24 NEW cov: 12481 ft: 14288 corp: 9/149b lim: 50 exec/s: 0 rss: 73Mb L: 11/34 MS: 1 ChangeByte- 00:08:52.373 [2024-11-05 10:36:18.296718] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229782938129862929 len:4370 00:08:52.373 [2024-11-05 10:36:18.296748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:52.373 [2024-11-05 10:36:18.296833] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 00:08:52.373 [2024-11-05 10:36:18.296850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:52.373 #25 NEW cov: 12481 ft: 14316 corp: 10/171b lim: 50 exec/s: 0 rss: 73Mb L: 22/34 MS: 1 EraseBytes- 00:08:52.373 [2024-11-05 10:36:18.367257] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229782938247301649 len:4370 00:08:52.373 [2024-11-05 10:36:18.367285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:52.373 [2024-11-05 10:36:18.367383] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 00:08:52.373 [2024-11-05 10:36:18.367401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:52.373 [2024-11-05 10:36:18.367499] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1229782938247303441 len:4370 00:08:52.373 [2024-11-05 10:36:18.367517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:52.373 #26 NEW cov: 12481 ft: 14378 corp: 11/204b lim: 50 exec/s: 26 rss: 74Mb L: 33/34 MS: 1 ShuffleBytes- 00:08:52.373 [2024-11-05 10:36:18.417784] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229782938129862929 len:4370 00:08:52.373 [2024-11-05 10:36:18.417814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:52.373 [2024-11-05 10:36:18.417897] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 00:08:52.373 [2024-11-05 10:36:18.417918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:52.373 [2024-11-05 10:36:18.418009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1229782938247303441 len:4370 00:08:52.373 [2024-11-05 10:36:18.418026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:52.373 #27 NEW cov: 12481 ft: 14406 corp: 12/237b lim: 50 exec/s: 27 rss: 74Mb L: 33/34 MS: 1 ShuffleBytes- 00:08:52.631 [2024-11-05 10:36:18.467992] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:838860800 len:37 00:08:52.631 [2024-11-05 10:36:18.468021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:52.631 [2024-11-05 10:36:18.468108] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744070020988927 len:65536 00:08:52.631 [2024-11-05 10:36:18.468128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:52.631 [2024-11-05 10:36:18.468223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18384859320165597183 len:9253 00:08:52.631 [2024-11-05 10:36:18.468241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:52.631 #28 NEW cov: 12481 ft: 14463 corp: 13/276b lim: 50 exec/s: 28 rss: 74Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:08:52.631 [2024-11-05 10:36:18.537658] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3026429945548111872 len:1 00:08:52.631 [2024-11-05 10:36:18.537687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:52.631 #29 NEW cov: 12481 ft: 
14489 corp: 14/288b lim: 50 exec/s: 29 rss: 74Mb L: 12/39 MS: 1 InsertByte- 00:08:52.631 [2024-11-05 10:36:18.608810] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:73300773376 len:4370 00:08:52.631 [2024-11-05 10:36:18.608839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:52.631 [2024-11-05 10:36:18.608929] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 00:08:52.631 [2024-11-05 10:36:18.608950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:52.631 [2024-11-05 10:36:18.609040] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1229782938247303441 len:4370 00:08:52.631 [2024-11-05 10:36:18.609061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:52.631 #30 NEW cov: 12481 ft: 14507 corp: 15/325b lim: 50 exec/s: 30 rss: 74Mb L: 37/39 MS: 1 CMP- DE: "\000\000\000\000"- 00:08:52.631 [2024-11-05 10:36:18.678830] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2814750605967360 len:11 00:08:52.631 [2024-11-05 10:36:18.678869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:52.631 #31 NEW cov: 12481 ft: 14511 corp: 16/336b lim: 50 exec/s: 31 rss: 74Mb L: 11/39 MS: 1 ChangeBit- 00:08:52.890 [2024-11-05 10:36:18.728991] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:44 00:08:52.890 [2024-11-05 10:36:18.729025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:52.890 #32 NEW cov: 12481 ft: 14548 corp: 17/351b lim: 50 exec/s: 32 rss: 74Mb L: 15/39 MS: 1 InsertRepeatedBytes- 00:08:52.890 [2024-11-05 10:36:18.779421] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:842989568 len:1 00:08:52.890 [2024-11-05 10:36:18.779450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:52.890 #33 NEW cov: 12481 ft: 14636 corp: 18/369b lim: 50 exec/s: 33 rss: 74Mb L: 18/39 MS: 1 InsertRepeatedBytes- 00:08:52.890 [2024-11-05 10:36:18.849793] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2814750605967360 len:190 00:08:52.890 [2024-11-05 10:36:18.849826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:52.890 #34 NEW cov: 12481 ft: 14644 corp: 19/380b lim: 50 exec/s: 34 rss: 74Mb L: 11/39 MS: 1 ChangeByte- 00:08:52.890 [2024-11-05 10:36:18.920077] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:838860800 len:11 00:08:52.890 [2024-11-05 10:36:18.920109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:52.890 #35 NEW cov: 12481 ft: 14686 corp: 20/391b lim: 50 exec/s: 35 rss: 74Mb L: 11/39 MS: 1 ChangeBit- 00:08:53.148 [2024-11-05 
10:36:18.970520] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16493559404080194788 len:58597 00:08:53.148 [2024-11-05 10:36:18.970549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:53.148 [2024-11-05 10:36:18.970629] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11821995811726336 len:1 00:08:53.148 [2024-11-05 10:36:18.970648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:53.148 #36 NEW cov: 12481 ft: 14725 corp: 21/414b lim: 50 exec/s: 36 rss: 74Mb L: 23/39 MS: 1 InsertRepeatedBytes- 00:08:53.148 [2024-11-05 10:36:19.040390] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2814750605967360 len:190 00:08:53.148 [2024-11-05 10:36:19.040421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:53.148 #37 NEW cov: 12482 ft: 14739 corp: 22/425b lim: 50 exec/s: 37 rss: 74Mb L: 11/39 MS: 1 ChangeBinInt- 00:08:53.148 [2024-11-05 10:36:19.110933] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:838860800 len:9253 00:08:53.148 [2024-11-05 10:36:19.110960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:53.148 [2024-11-05 10:36:19.111014] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2604246222170760228 len:9253 00:08:53.148 [2024-11-05 10:36:19.111036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:53.148 #38 NEW cov: 12482 ft: 14754 corp: 23/449b lim: 50 exec/s: 38 rss: 74Mb L: 24/39 MS: 1 EraseBytes- 00:08:53.148 [2024-11-05 10:36:19.161725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:838860800 len:1 00:08:53.148 [2024-11-05 10:36:19.161754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:53.148 [2024-11-05 10:36:19.161850] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:53.148 [2024-11-05 10:36:19.161871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:53.148 [2024-11-05 10:36:19.161968] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:53.148 [2024-11-05 10:36:19.161988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:53.148 [2024-11-05 10:36:19.162082] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:08:53.148 [2024-11-05 10:36:19.162100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:53.148 #39 NEW cov: 12482 ft: 15033 corp: 24/492b lim: 50 exec/s: 39 rss: 74Mb L: 43/43 MS: 1 InsertRepeatedBytes- 00:08:53.407 [2024-11-05 10:36:19.231480] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2814750605967360 len:1 00:08:53.407 [2024-11-05 10:36:19.231510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:53.407 [2024-11-05 10:36:19.231578] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:53269139343147008 len:190 00:08:53.407 [2024-11-05 10:36:19.231595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:53.407 #40 NEW cov: 12482 ft: 15035 corp: 25/513b lim: 50 exec/s: 40 rss: 74Mb L: 21/43 MS: 1 CopyPart- 00:08:53.407 [2024-11-05 10:36:19.281917] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:838860850 len:2561 00:08:53.407 [2024-11-05 10:36:19.281948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:53.407 [2024-11-05 10:36:19.282024] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10995116280384 len:1 00:08:53.407 [2024-11-05 10:36:19.282043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:53.407 #41 NEW cov: 12482 ft: 15107 corp: 26/535b lim: 50 exec/s: 41 rss: 74Mb L: 22/43 MS: 1 CopyPart- 00:08:53.407 [2024-11-05 10:36:19.332395] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1229782938129862929 len:4370 00:08:53.407 [2024-11-05 10:36:19.332423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:53.407 [2024-11-05 10:36:19.332513] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1229782938247303441 len:4370 00:08:53.407 [2024-11-05 10:36:19.332530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:53.407 [2024-11-05 10:36:19.332614] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17216961144068959982 len:4370 00:08:53.407 [2024-11-05 10:36:19.332634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:53.407 #42 NEW cov: 12482 ft: 15186 corp: 27/568b lim: 50 exec/s: 21 rss: 74Mb L: 33/43 MS: 1 ChangeBinInt- 00:08:53.407 #42 DONE cov: 12482 ft: 15186 corp: 27/568b lim: 50 exec/s: 21 rss: 74Mb 00:08:53.407 ###### Recommended dictionary. ###### 00:08:53.407 "\000\000\000\000" # Uses: 0 00:08:53.407 ###### End of recommended dictionary. 
###### 00:08:53.407 Done 42 runs in 2 second(s) 00:08:53.665 10:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:08:53.665 10:36:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:53.665 10:36:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:53.665 10:36:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:08:53.665 10:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:08:53.665 10:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:53.665 10:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:53.665 10:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:53.665 10:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:08:53.665 10:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:53.665 10:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:53.665 10:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:08:53.665 10:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:08:53.665 10:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:53.665 10:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:08:53.665 10:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:53.665 10:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:53.665 10:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:53.665 10:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:08:53.665 [2024-11-05 10:36:19.532711] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
00:08:53.665 [2024-11-05 10:36:19.532786] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2867868 ] 00:08:53.924 [2024-11-05 10:36:19.781517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.924 [2024-11-05 10:36:19.829358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.924 [2024-11-05 10:36:19.893245] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.924 [2024-11-05 10:36:19.909472] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:53.924 INFO: Running with entropic power schedule (0xFF, 100). 00:08:53.924 INFO: Seed: 2851224480 00:08:53.924 INFO: Loaded 1 modules (387441 inline 8-bit counters): 387441 [0x2c3ac4c, 0x2c995bd), 00:08:53.924 INFO: Loaded 1 PC tables (387441 PCs): 387441 [0x2c995c0,0x3282cd0), 00:08:53.924 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:53.924 INFO: A corpus is not provided, starting from an empty corpus 00:08:53.924 #2 INITED exec/s: 0 rss: 66Mb 00:08:53.924 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:53.924 This may also happen if the target rejected all inputs we tried so far 00:08:53.924 [2024-11-05 10:36:19.958727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:53.924 [2024-11-05 10:36:19.958758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:53.924 [2024-11-05 10:36:19.958813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:53.924 [2024-11-05 10:36:19.958826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:53.924 [2024-11-05 10:36:19.958881] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:53.924 [2024-11-05 10:36:19.958898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:54.440 NEW_FUNC[1/717]: 0x45def8 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:08:54.440 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:54.440 #12 NEW cov: 12311 ft: 12309 corp: 2/71b lim: 90 exec/s: 0 rss: 73Mb L: 70/70 MS: 5 CrossOver-ShuffleBytes-CopyPart-CrossOver-InsertRepeatedBytes- 00:08:54.440 [2024-11-05 10:36:20.420067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:54.440 [2024-11-05 10:36:20.420109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:54.440 [2024-11-05 10:36:20.420157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:54.440 [2024-11-05 10:36:20.420172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 
m:0 dnr:1 00:08:54.440 [2024-11-05 10:36:20.420230] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:54.440 [2024-11-05 10:36:20.420246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:54.440 #18 NEW cov: 12425 ft: 12661 corp: 3/141b lim: 90 exec/s: 0 rss: 73Mb L: 70/70 MS: 1 ChangeBinInt- 00:08:54.440 [2024-11-05 10:36:20.479966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:54.440 [2024-11-05 10:36:20.479995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:54.440 [2024-11-05 10:36:20.480056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:54.440 [2024-11-05 10:36:20.480072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:54.698 #19 NEW cov: 12431 ft: 13328 corp: 4/189b lim: 90 exec/s: 0 rss: 73Mb L: 48/70 MS: 1 EraseBytes- 00:08:54.698 [2024-11-05 10:36:20.540248] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:54.698 [2024-11-05 10:36:20.540279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:54.698 [2024-11-05 10:36:20.540338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:54.698 [2024-11-05 10:36:20.540351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:54.698 [2024-11-05 10:36:20.540409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:54.698 [2024-11-05 10:36:20.540424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:54.698 #20 NEW cov: 12516 ft: 13655 corp: 5/254b lim: 90 exec/s: 0 rss: 73Mb L: 65/70 MS: 1 EraseBytes- 00:08:54.698 [2024-11-05 10:36:20.580385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:54.698 [2024-11-05 10:36:20.580413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:54.698 [2024-11-05 10:36:20.580471] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:54.698 [2024-11-05 10:36:20.580485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:54.698 [2024-11-05 10:36:20.580542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:54.698 [2024-11-05 10:36:20.580558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:54.698 #21 NEW cov: 12516 ft: 13780 corp: 6/325b lim: 90 exec/s: 0 rss: 73Mb L: 71/71 MS: 1 InsertByte- 00:08:54.698 [2024-11-05 10:36:20.620444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:54.698 [2024-11-05 10:36:20.620472] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:54.698 [2024-11-05 10:36:20.620527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:54.698 [2024-11-05 10:36:20.620542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:54.698 [2024-11-05 10:36:20.620601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:54.698 [2024-11-05 10:36:20.620618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:54.698 #22 NEW cov: 12516 ft: 13918 corp: 7/395b lim: 90 exec/s: 0 rss: 73Mb L: 70/71 MS: 1 ChangeBit- 00:08:54.698 [2024-11-05 10:36:20.660585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:54.698 [2024-11-05 10:36:20.660613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:54.698 [2024-11-05 10:36:20.660670] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:54.698 [2024-11-05 10:36:20.660684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:54.699 [2024-11-05 10:36:20.660737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:54.699 [2024-11-05 10:36:20.660754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:54.699 #23 NEW cov: 12516 ft: 13990 corp: 8/465b lim: 90 exec/s: 0 rss: 73Mb L: 70/71 MS: 1 ChangeByte- 00:08:54.699 [2024-11-05 10:36:20.700717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:54.699 [2024-11-05 10:36:20.700745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:54.699 [2024-11-05 10:36:20.700815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:54.699 [2024-11-05 10:36:20.700832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:54.699 [2024-11-05 10:36:20.700892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:54.699 [2024-11-05 10:36:20.700909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:54.699 #24 NEW cov: 12516 ft: 14039 corp: 9/535b lim: 90 exec/s: 0 rss: 73Mb L: 70/71 MS: 1 ShuffleBytes- 00:08:54.699 [2024-11-05 10:36:20.760933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:54.699 [2024-11-05 10:36:20.760960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:54.699 [2024-11-05 10:36:20.761016] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:54.699 [2024-11-05 10:36:20.761031] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:54.699 [2024-11-05 10:36:20.761090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:54.699 [2024-11-05 10:36:20.761106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:54.957 #25 NEW cov: 12516 ft: 14155 corp: 10/605b lim: 90 exec/s: 0 rss: 73Mb L: 70/71 MS: 1 ChangeBinInt- 00:08:54.957 [2024-11-05 10:36:20.801167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:54.957 [2024-11-05 10:36:20.801195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:54.957 [2024-11-05 10:36:20.801253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:54.957 [2024-11-05 10:36:20.801268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:54.957 [2024-11-05 10:36:20.801324] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:54.957 [2024-11-05 10:36:20.801340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:54.957 [2024-11-05 10:36:20.801399] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:54.957 [2024-11-05 10:36:20.801416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:54.957 NEW_FUNC[1/1]: 0x1c30d58 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:54.957 #26 NEW cov: 12539 ft: 14513 corp: 11/689b lim: 90 exec/s: 0 rss: 73Mb L: 84/84 MS: 1 InsertRepeatedBytes- 00:08:54.957 [2024-11-05 10:36:20.861184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:54.957 [2024-11-05 10:36:20.861213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:54.957 [2024-11-05 10:36:20.861271] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:54.957 [2024-11-05 10:36:20.861284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:54.957 [2024-11-05 10:36:20.861341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:54.957 [2024-11-05 10:36:20.861357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:54.957 #27 NEW cov: 12539 ft: 14598 corp: 12/759b lim: 90 exec/s: 0 rss: 73Mb L: 70/84 MS: 1 ChangeByte- 00:08:54.957 [2024-11-05 10:36:20.901094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:54.957 [2024-11-05 10:36:20.901120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:54.957 [2024-11-05 10:36:20.901198] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:54.957 [2024-11-05 10:36:20.901213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:54.957 #28 NEW cov: 12539 ft: 14649 corp: 13/807b lim: 90 exec/s: 28 rss: 73Mb L: 48/84 MS: 1 CopyPart- 00:08:54.957 [2024-11-05 10:36:20.961459] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:54.957 [2024-11-05 10:36:20.961486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:54.957 [2024-11-05 10:36:20.961560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:54.957 [2024-11-05 10:36:20.961574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:54.957 [2024-11-05 10:36:20.961633] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:54.957 [2024-11-05 10:36:20.961650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:54.957 #29 NEW cov: 12539 ft: 14734 corp: 14/877b lim: 90 exec/s: 29 rss: 73Mb L: 70/84 MS: 1 CMP- DE: "\001\006"- 00:08:54.957 [2024-11-05 10:36:21.001585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:54.957 [2024-11-05 10:36:21.001613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:54.957 [2024-11-05 10:36:21.001675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:54.957 [2024-11-05 10:36:21.001689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:54.957 [2024-11-05 10:36:21.001742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:54.957 [2024-11-05 10:36:21.001759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:54.957 #30 NEW cov: 12539 ft: 14754 corp: 15/947b lim: 90 exec/s: 30 rss: 73Mb L: 70/84 MS: 1 ChangeBit- 00:08:55.216 [2024-11-05 10:36:21.041678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:55.216 [2024-11-05 10:36:21.041705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.216 [2024-11-05 10:36:21.041761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:55.216 [2024-11-05 10:36:21.041777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.216 [2024-11-05 10:36:21.041829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:55.216 [2024-11-05 10:36:21.041846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:55.216 #31 NEW cov: 12539 ft: 14762 corp: 16/1017b 
lim: 90 exec/s: 31 rss: 74Mb L: 70/84 MS: 1 ChangeBit- 00:08:55.216 [2024-11-05 10:36:21.101883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:55.216 [2024-11-05 10:36:21.101911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.216 [2024-11-05 10:36:21.101968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:55.216 [2024-11-05 10:36:21.101981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.216 [2024-11-05 10:36:21.102037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:55.216 [2024-11-05 10:36:21.102054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:55.216 #32 NEW cov: 12539 ft: 14792 corp: 17/1079b lim: 90 exec/s: 32 rss: 74Mb L: 62/84 MS: 1 EraseBytes- 00:08:55.216 [2024-11-05 10:36:21.162025] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:55.216 [2024-11-05 10:36:21.162054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.216 [2024-11-05 10:36:21.162112] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:55.216 [2024-11-05 10:36:21.162126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.216 [2024-11-05 10:36:21.162185] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:55.216 [2024-11-05 10:36:21.162200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:55.216 #33 NEW cov: 12539 ft: 14829 corp: 18/1149b lim: 90 exec/s: 33 rss: 74Mb L: 70/84 MS: 1 ChangeByte- 00:08:55.216 [2024-11-05 10:36:21.202354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:55.216 [2024-11-05 10:36:21.202382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.216 [2024-11-05 10:36:21.202435] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:55.216 [2024-11-05 10:36:21.202453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.216 [2024-11-05 10:36:21.202511] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:55.216 [2024-11-05 10:36:21.202527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:55.216 [2024-11-05 10:36:21.202584] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:55.216 [2024-11-05 10:36:21.202601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:55.216 #34 NEW cov: 12539 ft: 14836 corp: 19/1225b lim: 
90 exec/s: 34 rss: 74Mb L: 76/84 MS: 1 InsertRepeatedBytes- 00:08:55.216 [2024-11-05 10:36:21.262497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:55.216 [2024-11-05 10:36:21.262526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.216 [2024-11-05 10:36:21.262591] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:55.216 [2024-11-05 10:36:21.262607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.216 [2024-11-05 10:36:21.262664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:55.216 [2024-11-05 10:36:21.262682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:55.216 [2024-11-05 10:36:21.262736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:55.216 [2024-11-05 10:36:21.262753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:55.475 #35 NEW cov: 12539 ft: 14837 corp: 20/1301b lim: 90 exec/s: 35 rss: 74Mb L: 76/84 MS: 1 ChangeBinInt- 00:08:55.475 [2024-11-05 10:36:21.322684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:55.475 [2024-11-05 10:36:21.322718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.475 [2024-11-05 10:36:21.322773] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:55.475 [2024-11-05 10:36:21.322790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.475 [2024-11-05 10:36:21.322849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:55.475 [2024-11-05 10:36:21.322866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:55.475 [2024-11-05 10:36:21.322930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:55.475 [2024-11-05 10:36:21.322947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:55.475 #36 NEW cov: 12539 ft: 14854 corp: 21/1379b lim: 90 exec/s: 36 rss: 74Mb L: 78/84 MS: 1 CMP- DE: "\001:q\327\015\305G\344"- 00:08:55.475 [2024-11-05 10:36:21.362442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:55.475 [2024-11-05 10:36:21.362472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.475 [2024-11-05 10:36:21.362535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:55.475 [2024-11-05 10:36:21.362551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.475 #37 NEW cov: 12539 
ft: 14863 corp: 22/1427b lim: 90 exec/s: 37 rss: 74Mb L: 48/84 MS: 1 ChangeByte- 00:08:55.475 [2024-11-05 10:36:21.402939] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:55.475 [2024-11-05 10:36:21.402968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.475 [2024-11-05 10:36:21.403019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:55.475 [2024-11-05 10:36:21.403037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.475 [2024-11-05 10:36:21.403092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:55.475 [2024-11-05 10:36:21.403110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:55.475 [2024-11-05 10:36:21.403167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:55.475 [2024-11-05 10:36:21.403184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:55.475 #38 NEW cov: 12539 ft: 14895 corp: 23/1506b lim: 90 exec/s: 38 rss: 74Mb L: 79/84 MS: 1 InsertRepeatedBytes- 00:08:55.475 [2024-11-05 10:36:21.442872] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:55.475 [2024-11-05 10:36:21.442900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.475 [2024-11-05 10:36:21.442958] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:55.475 [2024-11-05 10:36:21.442971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.475 [2024-11-05 10:36:21.443029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:55.475 [2024-11-05 10:36:21.443046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:55.475 #39 NEW cov: 12539 ft: 14933 corp: 24/1576b lim: 90 exec/s: 39 rss: 74Mb L: 70/84 MS: 1 CMP- DE: "\360\334\005}\322q:\000"- 00:08:55.475 [2024-11-05 10:36:21.503185] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:55.475 [2024-11-05 10:36:21.503213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.475 [2024-11-05 10:36:21.503268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:55.475 [2024-11-05 10:36:21.503283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.475 [2024-11-05 10:36:21.503333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:55.475 [2024-11-05 10:36:21.503350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:08:55.475 [2024-11-05 10:36:21.503411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:55.475 [2024-11-05 10:36:21.503428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:55.475 #40 NEW cov: 12539 ft: 14944 corp: 25/1656b lim: 90 exec/s: 40 rss: 74Mb L: 80/84 MS: 1 InsertByte- 00:08:55.733 [2024-11-05 10:36:21.563231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:55.733 [2024-11-05 10:36:21.563260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.733 [2024-11-05 10:36:21.563319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:55.733 [2024-11-05 10:36:21.563336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.733 [2024-11-05 10:36:21.563393] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:55.733 [2024-11-05 10:36:21.563410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:55.733 #41 NEW cov: 12539 ft: 14951 corp: 26/1726b lim: 90 exec/s: 41 rss: 74Mb L: 70/84 MS: 1 ChangeBit- 00:08:55.733 [2024-11-05 10:36:21.623383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:55.733 [2024-11-05 10:36:21.623411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.733 [2024-11-05 10:36:21.623470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:55.733 [2024-11-05 10:36:21.623484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.733 [2024-11-05 10:36:21.623542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:55.733 [2024-11-05 10:36:21.623557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:55.733 #42 NEW cov: 12539 ft: 14981 corp: 27/1796b lim: 90 exec/s: 42 rss: 74Mb L: 70/84 MS: 1 CMP- DE: "\004\000\000\000"- 00:08:55.733 [2024-11-05 10:36:21.663693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:55.733 [2024-11-05 10:36:21.663730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.733 [2024-11-05 10:36:21.663782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:55.733 [2024-11-05 10:36:21.663798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.733 [2024-11-05 10:36:21.663840] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:55.733 [2024-11-05 10:36:21.663858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 
m:0 dnr:1 00:08:55.733 [2024-11-05 10:36:21.663915] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:55.733 [2024-11-05 10:36:21.663930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:55.733 #43 NEW cov: 12539 ft: 14985 corp: 28/1885b lim: 90 exec/s: 43 rss: 74Mb L: 89/89 MS: 1 InsertRepeatedBytes- 00:08:55.733 [2024-11-05 10:36:21.703609] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:55.733 [2024-11-05 10:36:21.703637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.733 [2024-11-05 10:36:21.703692] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:55.733 [2024-11-05 10:36:21.703708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.733 [2024-11-05 10:36:21.703766] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:55.733 [2024-11-05 10:36:21.703784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:55.734 #44 NEW cov: 12539 ft: 14995 corp: 29/1955b lim: 90 exec/s: 44 rss: 74Mb L: 70/89 MS: 1 ShuffleBytes- 00:08:55.734 [2024-11-05 10:36:21.743574] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:55.734 [2024-11-05 10:36:21.743601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.734 [2024-11-05 10:36:21.743660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:55.734 [2024-11-05 10:36:21.743676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.734 #45 NEW cov: 12539 ft: 15013 corp: 30/2003b lim: 90 exec/s: 45 rss: 74Mb L: 48/89 MS: 1 ChangeBit- 00:08:55.734 [2024-11-05 10:36:21.803899] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:55.734 [2024-11-05 10:36:21.803927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.734 [2024-11-05 10:36:21.803985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:55.734 [2024-11-05 10:36:21.803999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.734 [2024-11-05 10:36:21.804056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:55.734 [2024-11-05 10:36:21.804072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:55.992 #46 NEW cov: 12539 ft: 15020 corp: 31/2067b lim: 90 exec/s: 46 rss: 74Mb L: 64/89 MS: 1 EraseBytes- 00:08:55.992 [2024-11-05 10:36:21.844217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:55.992 [2024-11-05 10:36:21.844245] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.992 [2024-11-05 10:36:21.844297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:55.992 [2024-11-05 10:36:21.844314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.992 [2024-11-05 10:36:21.844355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:55.992 [2024-11-05 10:36:21.844372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:55.992 [2024-11-05 10:36:21.844450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:55.992 [2024-11-05 10:36:21.844466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:55.992 #47 NEW cov: 12539 ft: 15060 corp: 32/2151b lim: 90 exec/s: 47 rss: 74Mb L: 84/89 MS: 1 ChangeBinInt- 00:08:55.992 [2024-11-05 10:36:21.904054] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:55.992 [2024-11-05 10:36:21.904081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.992 [2024-11-05 10:36:21.904149] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:55.992 [2024-11-05 10:36:21.904167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.992 #48 NEW cov: 12539 ft: 15077 corp: 33/2201b lim: 90 exec/s: 24 rss: 74Mb L: 50/89 MS: 1 EraseBytes- 00:08:55.992 #48 DONE cov: 12539 ft: 15077 corp: 33/2201b lim: 90 exec/s: 24 rss: 74Mb 00:08:55.992 ###### Recommended dictionary. ###### 00:08:55.992 "\001\006" # Uses: 0 00:08:55.992 "\001:q\327\015\305G\344" # Uses: 0 00:08:55.992 "\360\334\005}\322q:\000" # Uses: 0 00:08:55.992 "\004\000\000\000" # Uses: 0 00:08:55.992 ###### End of recommended dictionary. 
###### 00:08:55.992 Done 48 runs in 2 second(s) 00:08:56.251 10:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:08:56.251 10:36:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:56.251 10:36:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:56.251 10:36:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:08:56.251 10:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:08:56.251 10:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:56.251 10:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:56.251 10:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:56.251 10:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:08:56.251 10:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:56.251 10:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:56.251 10:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:08:56.251 10:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:08:56.251 10:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:56.251 10:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:08:56.251 10:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:56.251 10:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:56.251 10:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:56.251 10:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:08:56.251 [2024-11-05 10:36:22.115028] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
00:08:56.251 [2024-11-05 10:36:22.115099] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2868217 ] 00:08:56.509 [2024-11-05 10:36:22.380782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.509 [2024-11-05 10:36:22.429543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.509 [2024-11-05 10:36:22.493384] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:56.509 [2024-11-05 10:36:22.509605] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:08:56.509 INFO: Running with entropic power schedule (0xFF, 100). 00:08:56.509 INFO: Seed: 1158253938 00:08:56.509 INFO: Loaded 1 modules (387441 inline 8-bit counters): 387441 [0x2c3ac4c, 0x2c995bd), 00:08:56.509 INFO: Loaded 1 PC tables (387441 PCs): 387441 [0x2c995c0,0x3282cd0), 00:08:56.509 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:56.509 INFO: A corpus is not provided, starting from an empty corpus 00:08:56.509 #2 INITED exec/s: 0 rss: 66Mb 00:08:56.509 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:56.509 This may also happen if the target rejected all inputs we tried so far 00:08:56.509 [2024-11-05 10:36:22.555310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:56.509 [2024-11-05 10:36:22.555341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:56.509 [2024-11-05 10:36:22.555396] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:56.509 [2024-11-05 10:36:22.555412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:57.025 NEW_FUNC[1/717]: 0x461128 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:08:57.025 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:57.025 #7 NEW cov: 12287 ft: 12284 corp: 2/21b lim: 50 exec/s: 0 rss: 73Mb L: 20/20 MS: 5 CrossOver-ShuffleBytes-CopyPart-EraseBytes-InsertRepeatedBytes- 00:08:57.025 [2024-11-05 10:36:22.876204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:57.025 [2024-11-05 10:36:22.876241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.025 [2024-11-05 10:36:22.876299] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:57.025 [2024-11-05 10:36:22.876316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:57.025 #8 NEW cov: 12400 ft: 12722 corp: 3/41b lim: 50 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 ChangeByte- 00:08:57.025 [2024-11-05 10:36:22.936110] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:57.025 [2024-11-05 10:36:22.936139] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.025 #14 NEW cov: 12406 ft: 13719 corp: 4/55b lim: 50 exec/s: 0 rss: 73Mb L: 14/20 MS: 1 EraseBytes- 00:08:57.025 [2024-11-05 10:36:22.996571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:57.025 [2024-11-05 10:36:22.996600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.025 [2024-11-05 10:36:22.996662] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:57.025 [2024-11-05 10:36:22.996676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:57.025 [2024-11-05 10:36:22.996729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:57.025 [2024-11-05 10:36:22.996746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:57.025 #15 NEW cov: 12491 ft: 14259 corp: 5/89b lim: 50 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:08:57.025 [2024-11-05 10:36:23.036527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:57.025 [2024-11-05 10:36:23.036555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.025 [2024-11-05 10:36:23.036618] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:57.025 [2024-11-05 10:36:23.036635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:57.025 #16 NEW cov: 12491 ft: 14477 corp: 6/110b lim: 50 exec/s: 0 rss: 73Mb L: 21/34 MS: 1 InsertByte- 00:08:57.025 [2024-11-05 10:36:23.076449] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:57.025 [2024-11-05 10:36:23.076477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.283 #17 NEW cov: 12491 ft: 14574 corp: 7/124b lim: 50 exec/s: 0 rss: 73Mb L: 14/34 MS: 1 ChangeBit- 00:08:57.283 [2024-11-05 10:36:23.137001] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:57.283 [2024-11-05 10:36:23.137029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.283 [2024-11-05 10:36:23.137088] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:57.283 [2024-11-05 10:36:23.137101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:57.283 [2024-11-05 10:36:23.137162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:57.283 [2024-11-05 10:36:23.137178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:57.283 #18 NEW cov: 12491 ft: 14664 corp: 8/159b lim: 50 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 
CopyPart- 00:08:57.283 [2024-11-05 10:36:23.177100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:57.283 [2024-11-05 10:36:23.177128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.283 [2024-11-05 10:36:23.177186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:57.283 [2024-11-05 10:36:23.177200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:57.283 [2024-11-05 10:36:23.177259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:57.283 [2024-11-05 10:36:23.177276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:57.283 #19 NEW cov: 12491 ft: 14853 corp: 9/193b lim: 50 exec/s: 0 rss: 73Mb L: 34/35 MS: 1 ChangeByte- 00:08:57.283 [2024-11-05 10:36:23.237253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:57.283 [2024-11-05 10:36:23.237282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.283 [2024-11-05 10:36:23.237341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:57.283 [2024-11-05 10:36:23.237355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:57.283 [2024-11-05 10:36:23.237415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:57.283 [2024-11-05 10:36:23.237432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:57.283 #20 NEW cov: 12491 ft: 14881 corp: 10/227b lim: 50 exec/s: 0 rss: 73Mb L: 34/35 MS: 1 ChangeByte- 00:08:57.284 [2024-11-05 10:36:23.277369] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:57.284 [2024-11-05 10:36:23.277396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.284 [2024-11-05 10:36:23.277454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:57.284 [2024-11-05 10:36:23.277468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:57.284 [2024-11-05 10:36:23.277525] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:57.284 [2024-11-05 10:36:23.277542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:57.284 #21 NEW cov: 12491 ft: 14934 corp: 11/265b lim: 50 exec/s: 0 rss: 73Mb L: 38/38 MS: 1 CopyPart- 00:08:57.284 [2024-11-05 10:36:23.337554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:57.284 [2024-11-05 10:36:23.337583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:08:57.284 [2024-11-05 10:36:23.337642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:57.284 [2024-11-05 10:36:23.337656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:57.284 [2024-11-05 10:36:23.337717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:57.284 [2024-11-05 10:36:23.337737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:57.542 #22 NEW cov: 12491 ft: 14956 corp: 12/300b lim: 50 exec/s: 0 rss: 73Mb L: 35/38 MS: 1 ChangeByte- 00:08:57.542 [2024-11-05 10:36:23.397335] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:57.542 [2024-11-05 10:36:23.397360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.542 NEW_FUNC[1/1]: 0x1c30d58 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:57.542 #23 NEW cov: 12514 ft: 15019 corp: 13/314b lim: 50 exec/s: 0 rss: 73Mb L: 14/38 MS: 1 ChangeByte- 00:08:57.542 [2024-11-05 10:36:23.457859] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:57.542 [2024-11-05 10:36:23.457888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.542 [2024-11-05 10:36:23.457948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:57.542 [2024-11-05 10:36:23.457962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:57.542 [2024-11-05 10:36:23.458023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:57.542 [2024-11-05 10:36:23.458040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:57.542 #24 NEW cov: 12514 ft: 15038 corp: 14/348b lim: 50 exec/s: 0 rss: 73Mb L: 34/38 MS: 1 ShuffleBytes- 00:08:57.542 [2024-11-05 10:36:23.498038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:57.542 [2024-11-05 10:36:23.498066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.542 [2024-11-05 10:36:23.498124] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:57.542 [2024-11-05 10:36:23.498138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:57.542 [2024-11-05 10:36:23.498198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:57.542 [2024-11-05 10:36:23.498214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:57.542 #25 NEW cov: 12514 ft: 15059 corp: 15/383b lim: 50 exec/s: 0 rss: 73Mb L: 35/38 MS: 1 InsertByte- 00:08:57.542 [2024-11-05 10:36:23.537955] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:57.542 [2024-11-05 10:36:23.537984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.542 [2024-11-05 10:36:23.538046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:57.542 [2024-11-05 10:36:23.538062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:57.542 #26 NEW cov: 12514 ft: 15082 corp: 16/403b lim: 50 exec/s: 26 rss: 73Mb L: 20/38 MS: 1 ShuffleBytes- 00:08:57.542 [2024-11-05 10:36:23.578477] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:57.542 [2024-11-05 10:36:23.578507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.542 [2024-11-05 10:36:23.578562] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:57.542 [2024-11-05 10:36:23.578578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:57.542 [2024-11-05 10:36:23.578638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:57.542 [2024-11-05 10:36:23.578659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:57.542 [2024-11-05 10:36:23.578721] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:57.542 [2024-11-05 10:36:23.578738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:57.542 #27 NEW cov: 12514 ft: 15401 corp: 17/443b lim: 50 exec/s: 27 rss: 73Mb L: 40/40 MS: 1 CrossOver- 00:08:57.542 [2024-11-05 10:36:23.618400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:57.542 [2024-11-05 10:36:23.618429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.542 [2024-11-05 10:36:23.618491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:57.542 [2024-11-05 10:36:23.618506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:57.542 [2024-11-05 10:36:23.618566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:57.542 [2024-11-05 10:36:23.618583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:57.800 #28 NEW cov: 12514 ft: 15438 corp: 18/482b lim: 50 exec/s: 28 rss: 74Mb L: 39/40 MS: 1 InsertByte- 00:08:57.800 [2024-11-05 10:36:23.678377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:57.800 [2024-11-05 10:36:23.678403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.800 [2024-11-05 10:36:23.678465] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:57.800 [2024-11-05 10:36:23.678482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:57.800 #29 NEW cov: 12514 ft: 15457 corp: 19/503b lim: 50 exec/s: 29 rss: 74Mb L: 21/40 MS: 1 CrossOver- 00:08:57.800 [2024-11-05 10:36:23.738723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:57.800 [2024-11-05 10:36:23.738751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.800 [2024-11-05 10:36:23.738806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:57.800 [2024-11-05 10:36:23.738822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:57.800 [2024-11-05 10:36:23.738883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:57.800 [2024-11-05 10:36:23.738900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:57.800 #30 NEW cov: 12514 ft: 15511 corp: 20/541b lim: 50 exec/s: 30 rss: 74Mb L: 38/40 MS: 1 ShuffleBytes- 00:08:57.800 [2024-11-05 10:36:23.779044] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:57.800 [2024-11-05 10:36:23.779072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.800 [2024-11-05 10:36:23.779128] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:57.800 [2024-11-05 10:36:23.779144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:57.800 [2024-11-05 10:36:23.779203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:57.800 [2024-11-05 10:36:23.779220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:57.800 [2024-11-05 10:36:23.779285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:57.800 [2024-11-05 10:36:23.779301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:57.800 #31 NEW cov: 12514 ft: 15520 corp: 21/582b lim: 50 exec/s: 31 rss: 74Mb L: 41/41 MS: 1 InsertByte- 00:08:57.800 [2024-11-05 10:36:23.838656] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:57.800 [2024-11-05 10:36:23.838682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.800 #33 NEW cov: 12514 ft: 15530 corp: 22/595b lim: 50 exec/s: 33 rss: 74Mb L: 13/41 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:58.059 [2024-11-05 10:36:23.879504] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:58.059 [2024-11-05 10:36:23.879533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:58.059 [2024-11-05 10:36:23.879585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:58.059 [2024-11-05 10:36:23.879601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:58.059 [2024-11-05 10:36:23.879646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:58.059 [2024-11-05 10:36:23.879664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:58.059 [2024-11-05 10:36:23.879723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:58.059 [2024-11-05 10:36:23.879739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:58.059 [2024-11-05 10:36:23.879802] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:58.059 [2024-11-05 10:36:23.879819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:58.059 #34 NEW cov: 12514 ft: 15616 corp: 23/645b lim: 50 exec/s: 34 rss: 74Mb L: 50/50 MS: 1 CopyPart- 00:08:58.059 [2024-11-05 10:36:23.938986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:58.059 [2024-11-05 10:36:23.939014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:58.059 #35 NEW cov: 12514 ft: 15639 corp: 24/660b lim: 50 exec/s: 35 rss: 74Mb L: 15/50 MS: 1 InsertByte- 00:08:58.059 [2024-11-05 10:36:23.999521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:58.059 [2024-11-05 10:36:23.999551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:58.059 [2024-11-05 10:36:23.999611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:58.059 [2024-11-05 10:36:23.999625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:58.059 [2024-11-05 10:36:23.999685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:58.059 [2024-11-05 10:36:23.999702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:58.059 [2024-11-05 10:36:24.039478] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:58.059 [2024-11-05 10:36:24.039510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:58.059 [2024-11-05 10:36:24.039567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:58.059 [2024-11-05 10:36:24.039587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:58.059 #37 NEW cov: 12514 ft: 15665 corp: 25/684b lim: 50 exec/s: 37 rss: 74Mb 
L: 24/50 MS: 2 ChangeBit-EraseBytes- 00:08:58.059 [2024-11-05 10:36:24.079745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:58.059 [2024-11-05 10:36:24.079775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:58.059 [2024-11-05 10:36:24.079830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:58.059 [2024-11-05 10:36:24.079848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:58.059 [2024-11-05 10:36:24.079908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:58.059 [2024-11-05 10:36:24.079925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:58.059 #38 NEW cov: 12514 ft: 15728 corp: 26/719b lim: 50 exec/s: 38 rss: 74Mb L: 35/50 MS: 1 ChangeBit- 00:08:58.317 [2024-11-05 10:36:24.139929] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:58.317 [2024-11-05 10:36:24.139957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:58.317 [2024-11-05 10:36:24.140018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:58.317 [2024-11-05 10:36:24.140031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:58.317 [2024-11-05 10:36:24.140092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:58.317 [2024-11-05 10:36:24.140108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:58.317 #39 NEW cov: 12514 ft: 15754 corp: 27/753b lim: 50 exec/s: 39 rss: 74Mb L: 34/50 MS: 1 CrossOver- 00:08:58.317 [2024-11-05 10:36:24.200119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:58.317 [2024-11-05 10:36:24.200149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:58.317 [2024-11-05 10:36:24.200210] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:58.317 [2024-11-05 10:36:24.200224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:58.317 [2024-11-05 10:36:24.200284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:58.317 [2024-11-05 10:36:24.200301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:58.317 #40 NEW cov: 12514 ft: 15780 corp: 28/787b lim: 50 exec/s: 40 rss: 74Mb L: 34/50 MS: 1 CopyPart- 00:08:58.317 [2024-11-05 10:36:24.239998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:58.317 [2024-11-05 10:36:24.240026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:08:58.317 [2024-11-05 10:36:24.240088] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:58.317 [2024-11-05 10:36:24.240101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:58.317 #41 NEW cov: 12514 ft: 15842 corp: 29/807b lim: 50 exec/s: 41 rss: 74Mb L: 20/50 MS: 1 ChangeBinInt- 00:08:58.317 [2024-11-05 10:36:24.300385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:58.317 [2024-11-05 10:36:24.300416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:58.317 [2024-11-05 10:36:24.300476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:58.317 [2024-11-05 10:36:24.300489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:58.317 [2024-11-05 10:36:24.300551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:58.317 [2024-11-05 10:36:24.300567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:58.317 #42 NEW cov: 12514 ft: 15857 corp: 30/843b lim: 50 exec/s: 42 rss: 74Mb L: 36/50 MS: 1 InsertByte- 00:08:58.317 [2024-11-05 10:36:24.360444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:58.317 [2024-11-05 10:36:24.360472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:58.317 [2024-11-05 10:36:24.360529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:58.317 [2024-11-05 10:36:24.360546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:58.576 #43 NEW cov: 12514 ft: 15883 corp: 31/863b lim: 50 exec/s: 43 rss: 74Mb L: 20/50 MS: 1 CrossOver- 00:08:58.576 [2024-11-05 10:36:24.420567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:58.576 [2024-11-05 10:36:24.420595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:58.576 [2024-11-05 10:36:24.420657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:58.576 [2024-11-05 10:36:24.420675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:58.576 #44 NEW cov: 12514 ft: 15938 corp: 32/883b lim: 50 exec/s: 44 rss: 74Mb L: 20/50 MS: 1 ChangeBit- 00:08:58.576 [2024-11-05 10:36:24.460484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:58.576 [2024-11-05 10:36:24.460510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:58.576 #45 NEW cov: 12514 ft: 15989 corp: 33/897b lim: 50 exec/s: 45 rss: 74Mb L: 14/50 MS: 1 CrossOver- 00:08:58.576 [2024-11-05 10:36:24.501003] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:58.576 [2024-11-05 10:36:24.501031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:58.576 [2024-11-05 10:36:24.501092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:58.576 [2024-11-05 10:36:24.501106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:58.576 [2024-11-05 10:36:24.501164] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:58.576 [2024-11-05 10:36:24.501181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:58.576 #46 NEW cov: 12514 ft: 15990 corp: 34/932b lim: 50 exec/s: 23 rss: 74Mb L: 35/50 MS: 1 CMP- DE: "\000\000\000\000"- 00:08:58.576 #46 DONE cov: 12514 ft: 15990 corp: 34/932b lim: 50 exec/s: 23 rss: 74Mb 00:08:58.576 ###### Recommended dictionary. ###### 00:08:58.576 "\000\000\000\000" # Uses: 0 00:08:58.576 ###### End of recommended dictionary. ###### 00:08:58.576 Done 46 runs in 2 second(s) 00:08:58.834 10:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:08:58.834 10:36:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:58.834 10:36:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:58.834 10:36:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:08:58.834 10:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:08:58.834 10:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:58.834 10:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:58.834 10:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:58.834 10:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:08:58.834 10:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:58.834 10:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:58.834 10:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:08:58.834 10:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:08:58.834 10:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:58.834 10:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:08:58.834 10:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:58.834 10:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:58.834 10:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:58.834 10:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:08:58.834 [2024-11-05 10:36:24.696007] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:08:58.834 [2024-11-05 10:36:24.696063] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2868580 ] 00:08:59.093 [2024-11-05 10:36:24.943913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.093 [2024-11-05 10:36:24.991705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.093 [2024-11-05 10:36:25.055620] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.093 [2024-11-05 10:36:25.071850] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:08:59.093 INFO: Running with entropic power schedule (0xFF, 100). 00:08:59.093 INFO: Seed: 3720259663 00:08:59.093 INFO: Loaded 1 modules (387441 inline 8-bit counters): 387441 [0x2c3ac4c, 0x2c995bd), 00:08:59.093 INFO: Loaded 1 PC tables (387441 PCs): 387441 [0x2c995c0,0x3282cd0), 00:08:59.093 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:59.093 INFO: A corpus is not provided, starting from an empty corpus 00:08:59.093 #2 INITED exec/s: 0 rss: 66Mb 00:08:59.093 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:59.093 This may also happen if the target rejected all inputs we tried so far 00:08:59.093 [2024-11-05 10:36:25.137487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:59.093 [2024-11-05 10:36:25.137527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:59.609 NEW_FUNC[1/717]: 0x4633f8 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:08:59.609 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:59.609 #6 NEW cov: 12292 ft: 12294 corp: 2/18b lim: 85 exec/s: 0 rss: 73Mb L: 17/17 MS: 4 InsertRepeatedBytes-ChangeBit-CopyPart-CopyPart- 00:08:59.609 [2024-11-05 10:36:25.629024] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:59.609 [2024-11-05 10:36:25.629088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:59.609 [2024-11-05 10:36:25.629172] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:59.609 [2024-11-05 10:36:25.629204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:59.609 #9 NEW cov: 12425 ft: 13720 corp: 3/53b lim: 85 exec/s: 0 rss: 73Mb L: 35/35 MS: 3 ChangeByte-ChangeBit-InsertRepeatedBytes- 00:08:59.867 [2024-11-05 10:36:25.688929] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:59.867 [2024-11-05 10:36:25.688968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:59.867 [2024-11-05 10:36:25.689031] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:59.867 [2024-11-05 10:36:25.689053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:59.867 #10 NEW cov: 12431 ft: 13872 corp: 4/88b lim: 85 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 ShuffleBytes- 00:08:59.867 [2024-11-05 10:36:25.769153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:59.867 [2024-11-05 10:36:25.769190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:59.867 [2024-11-05 10:36:25.769223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:59.867 [2024-11-05 10:36:25.769251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:59.867 #11 NEW cov: 12516 ft: 14180 corp: 5/127b lim: 85 exec/s: 0 rss: 73Mb L: 39/39 MS: 1 CopyPart- 00:08:59.867 [2024-11-05 10:36:25.819281] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:59.867 [2024-11-05 10:36:25.819318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:59.867 [2024-11-05 10:36:25.819364] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:59.867 [2024-11-05 10:36:25.819393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:59.867 #17 NEW cov: 12516 ft: 14251 corp: 6/162b lim: 85 exec/s: 0 rss: 73Mb L: 35/39 MS: 1 ChangeBinInt- 00:08:59.867 [2024-11-05 10:36:25.869383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:59.867 [2024-11-05 10:36:25.869418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:59.867 [2024-11-05 10:36:25.869465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:59.867 [2024-11-05 10:36:25.869488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:59.867 #18 NEW cov: 12516 ft: 14277 corp: 7/197b lim: 85 exec/s: 0 rss: 73Mb L: 35/39 MS: 1 CrossOver- 00:08:59.867 [2024-11-05 10:36:25.919575] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:59.867 [2024-11-05 10:36:25.919610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:59.867 [2024-11-05 10:36:25.919675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:59.867 [2024-11-05 10:36:25.919701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:00.126 #19 NEW cov: 12516 ft: 14413 corp: 8/236b lim: 85 exec/s: 0 rss: 73Mb L: 39/39 MS: 1 CopyPart- 00:09:00.126 [2024-11-05 10:36:25.999754] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:00.126 [2024-11-05 10:36:25.999789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.126 [2024-11-05 10:36:25.999839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:00.126 [2024-11-05 10:36:25.999859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:00.126 NEW_FUNC[1/1]: 0x1c30d58 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:00.126 #20 NEW cov: 12539 ft: 14468 corp: 9/275b lim: 85 exec/s: 0 rss: 73Mb L: 39/39 MS: 1 ChangeBit- 00:09:00.126 [2024-11-05 10:36:26.049926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:00.126 [2024-11-05 10:36:26.049963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.126 [2024-11-05 10:36:26.050014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:00.126 [2024-11-05 10:36:26.050037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:00.126 #21 NEW cov: 12539 ft: 14499 corp: 10/310b lim: 85 exec/s: 0 rss: 74Mb L: 35/39 MS: 1 ChangeBit- 00:09:00.126 
[2024-11-05 10:36:26.130155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:00.126 [2024-11-05 10:36:26.130191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.126 [2024-11-05 10:36:26.130237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:00.126 [2024-11-05 10:36:26.130259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:00.126 #22 NEW cov: 12539 ft: 14559 corp: 11/349b lim: 85 exec/s: 22 rss: 74Mb L: 39/39 MS: 1 ChangeByte- 00:09:00.126 [2024-11-05 10:36:26.180279] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:00.126 [2024-11-05 10:36:26.180315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.126 [2024-11-05 10:36:26.180361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:00.126 [2024-11-05 10:36:26.180383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:00.384 #23 NEW cov: 12539 ft: 14565 corp: 12/385b lim: 85 exec/s: 23 rss: 74Mb L: 36/39 MS: 1 InsertByte- 00:09:00.384 [2024-11-05 10:36:26.260500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:00.384 [2024-11-05 10:36:26.260538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.384 [2024-11-05 10:36:26.260597] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:00.384 [2024-11-05 10:36:26.260619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:00.384 #24 NEW cov: 12539 ft: 14585 corp: 13/421b lim: 85 exec/s: 24 rss: 74Mb L: 36/39 MS: 1 CrossOver- 00:09:00.384 [2024-11-05 10:36:26.310622] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:00.384 [2024-11-05 10:36:26.310658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.384 [2024-11-05 10:36:26.310721] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:00.384 [2024-11-05 10:36:26.310744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:00.384 #25 NEW cov: 12539 ft: 14665 corp: 14/456b lim: 85 exec/s: 25 rss: 74Mb L: 35/39 MS: 1 ShuffleBytes- 00:09:00.384 [2024-11-05 10:36:26.390880] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:00.384 [2024-11-05 10:36:26.390918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.384 [2024-11-05 10:36:26.390977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:00.384 [2024-11-05 10:36:26.390999] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:00.384 #26 NEW cov: 12539 ft: 14675 corp: 15/491b lim: 85 exec/s: 26 rss: 74Mb L: 35/39 MS: 1 ChangeBit- 00:09:00.642 [2024-11-05 10:36:26.470898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:00.642 [2024-11-05 10:36:26.470935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.642 #27 NEW cov: 12539 ft: 14712 corp: 16/518b lim: 85 exec/s: 27 rss: 74Mb L: 27/39 MS: 1 EraseBytes- 00:09:00.642 [2024-11-05 10:36:26.551275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:00.642 [2024-11-05 10:36:26.551313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.642 [2024-11-05 10:36:26.551373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:00.642 [2024-11-05 10:36:26.551395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:00.642 #28 NEW cov: 12539 ft: 14797 corp: 17/554b lim: 85 exec/s: 28 rss: 74Mb L: 36/39 MS: 1 CrossOver- 00:09:00.642 [2024-11-05 10:36:26.631346] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:00.642 [2024-11-05 10:36:26.631383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.642 #29 NEW cov: 12539 ft: 14807 corp: 18/571b lim: 85 exec/s: 29 rss: 74Mb L: 17/39 MS: 1 ChangeByte- 00:09:00.642 [2024-11-05 10:36:26.711718] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:00.642 [2024-11-05 10:36:26.711755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.642 [2024-11-05 10:36:26.711804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:00.642 [2024-11-05 10:36:26.711826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:00.901 #30 NEW cov: 12539 ft: 14881 corp: 19/610b lim: 85 exec/s: 30 rss: 74Mb L: 39/39 MS: 1 ShuffleBytes- 00:09:00.901 [2024-11-05 10:36:26.791761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:00.901 [2024-11-05 10:36:26.791798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.901 #31 NEW cov: 12539 ft: 14926 corp: 20/635b lim: 85 exec/s: 31 rss: 74Mb L: 25/39 MS: 1 EraseBytes- 00:09:00.901 [2024-11-05 10:36:26.872026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:00.901 [2024-11-05 10:36:26.872065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.901 #32 NEW cov: 12539 ft: 14943 corp: 21/663b lim: 85 exec/s: 32 rss: 74Mb L: 28/39 MS: 1 EraseBytes- 00:09:00.901 [2024-11-05 10:36:26.952408] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:00.901 [2024-11-05 10:36:26.952445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.901 [2024-11-05 10:36:26.952490] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:00.901 [2024-11-05 10:36:26.952514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:01.165 #33 NEW cov: 12539 ft: 14965 corp: 22/698b lim: 85 exec/s: 33 rss: 74Mb L: 35/39 MS: 1 ChangeBit- 00:09:01.165 [2024-11-05 10:36:27.002552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:01.165 [2024-11-05 10:36:27.002589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:01.165 [2024-11-05 10:36:27.002650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:01.165 [2024-11-05 10:36:27.002673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:01.165 #34 NEW cov: 12539 ft: 14973 corp: 23/733b lim: 85 exec/s: 34 rss: 74Mb L: 35/39 MS: 1 ChangeByte- 00:09:01.165 [2024-11-05 10:36:27.052520] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:01.165 [2024-11-05 10:36:27.052556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:01.165 #35 NEW cov: 12539 ft: 14990 corp: 24/750b lim: 85 exec/s: 35 rss: 74Mb L: 17/39 MS: 1 ChangeBit- 00:09:01.165 [2024-11-05 10:36:27.102852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:01.165 [2024-11-05 10:36:27.102888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:01.165 [2024-11-05 10:36:27.102934] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:01.165 [2024-11-05 10:36:27.102955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:01.165 #36 NEW cov: 12539 ft: 14997 corp: 25/786b lim: 85 exec/s: 18 rss: 74Mb L: 36/39 MS: 1 ShuffleBytes- 00:09:01.165 #36 DONE cov: 12539 ft: 14997 corp: 25/786b lim: 85 exec/s: 18 rss: 74Mb 00:09:01.165 Done 36 runs in 2 second(s) 00:09:01.423 10:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:09:01.424 10:36:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:01.424 10:36:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:01.424 10:36:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:09:01.424 10:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:09:01.424 10:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:01.424 10:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:01.424 10:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local 
corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:09:01.424 10:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:09:01.424 10:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:01.424 10:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:01.424 10:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:09:01.424 10:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:09:01.424 10:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:09:01.424 10:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:09:01.424 10:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:01.424 10:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:01.424 10:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:01.424 10:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:09:01.424 [2024-11-05 10:36:27.306094] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:09:01.424 [2024-11-05 10:36:27.306167] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2868931 ] 00:09:01.683 [2024-11-05 10:36:27.574391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.683 [2024-11-05 10:36:27.622859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.683 [2024-11-05 10:36:27.686804] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.683 [2024-11-05 10:36:27.703030] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:09:01.683 INFO: Running with entropic power schedule (0xFF, 100). 00:09:01.683 INFO: Seed: 2055286565 00:09:01.683 INFO: Loaded 1 modules (387441 inline 8-bit counters): 387441 [0x2c3ac4c, 0x2c995bd), 00:09:01.683 INFO: Loaded 1 PC tables (387441 PCs): 387441 [0x2c995c0,0x3282cd0), 00:09:01.683 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:09:01.683 INFO: A corpus is not provided, starting from an empty corpus 00:09:01.683 #2 INITED exec/s: 0 rss: 66Mb 00:09:01.683 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:01.683 This may also happen if the target rejected all inputs we tried so far 00:09:01.683 [2024-11-05 10:36:27.752344] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:01.683 [2024-11-05 10:36:27.752375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.199 NEW_FUNC[1/716]: 0x466638 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:09:02.199 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:02.199 #10 NEW cov: 12246 ft: 12243 corp: 2/9b lim: 25 exec/s: 0 rss: 73Mb L: 8/8 MS: 3 ChangeBinInt-ShuffleBytes-InsertRepeatedBytes- 00:09:02.199 [2024-11-05 10:36:28.213579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:02.199 [2024-11-05 10:36:28.213618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.199 #14 NEW cov: 12359 ft: 12777 corp: 3/14b lim: 25 exec/s: 0 rss: 73Mb L: 5/8 MS: 4 InsertByte-EraseBytes-InsertRepeatedBytes-CrossOver- 00:09:02.199 [2024-11-05 10:36:28.253570] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:02.199 [2024-11-05 10:36:28.253600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.458 #15 NEW cov: 12365 ft: 13142 corp: 4/19b lim: 25 exec/s: 0 rss: 73Mb L: 5/8 MS: 1 ChangeByte- 00:09:02.458 [2024-11-05 10:36:28.313748] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:02.458 [2024-11-05 10:36:28.313777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.458 #16 NEW cov: 12450 ft: 13465 corp: 5/24b lim: 25 exec/s: 0 rss: 73Mb L: 5/8 MS: 1 ChangeBit- 00:09:02.458 [2024-11-05 10:36:28.354133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:02.458 [2024-11-05 10:36:28.354162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.458 [2024-11-05 10:36:28.354220] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:02.458 [2024-11-05 10:36:28.354234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.458 [2024-11-05 10:36:28.354291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:02.458 [2024-11-05 10:36:28.354308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.458 #19 NEW cov: 12450 ft: 13953 corp: 6/41b lim: 25 exec/s: 0 rss: 73Mb L: 17/17 MS: 3 ChangeBit-ChangeByte-InsertRepeatedBytes- 00:09:02.458 [2024-11-05 10:36:28.394240] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:02.458 [2024-11-05 10:36:28.394268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.458 [2024-11-05 10:36:28.394323] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:02.458 [2024-11-05 10:36:28.394339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.458 [2024-11-05 10:36:28.394396] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:02.458 [2024-11-05 10:36:28.394413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.458 #20 NEW cov: 12450 ft: 13992 corp: 7/59b lim: 25 exec/s: 0 rss: 73Mb L: 18/18 MS: 1 InsertByte- 00:09:02.458 [2024-11-05 10:36:28.454572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:02.458 [2024-11-05 10:36:28.454603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.458 [2024-11-05 10:36:28.454653] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:02.458 [2024-11-05 10:36:28.454671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.458 [2024-11-05 10:36:28.454720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:02.458 [2024-11-05 10:36:28.454738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.458 [2024-11-05 10:36:28.454799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:02.458 [2024-11-05 10:36:28.454815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.458 #21 NEW cov: 12450 ft: 14520 corp: 8/83b lim: 25 exec/s: 0 rss: 73Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:09:02.458 [2024-11-05 10:36:28.514706] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:02.458 [2024-11-05 10:36:28.514738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.458 [2024-11-05 10:36:28.514794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:02.458 [2024-11-05 10:36:28.514811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.458 [2024-11-05 10:36:28.514834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:02.458 [2024-11-05 10:36:28.514852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.458 [2024-11-05 10:36:28.514909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:02.458 [2024-11-05 10:36:28.514926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.716 #22 NEW cov: 12450 ft: 14545 corp: 9/107b 
lim: 25 exec/s: 0 rss: 73Mb L: 24/24 MS: 1 CopyPart- 00:09:02.716 [2024-11-05 10:36:28.574452] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:02.716 [2024-11-05 10:36:28.574478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.716 #23 NEW cov: 12450 ft: 14581 corp: 10/115b lim: 25 exec/s: 0 rss: 73Mb L: 8/24 MS: 1 ChangeBinInt- 00:09:02.716 [2024-11-05 10:36:28.635060] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:02.716 [2024-11-05 10:36:28.635087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.716 [2024-11-05 10:36:28.635141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:02.716 [2024-11-05 10:36:28.635159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.716 [2024-11-05 10:36:28.635201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:02.717 [2024-11-05 10:36:28.635218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.717 [2024-11-05 10:36:28.635277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:02.717 [2024-11-05 10:36:28.635294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.717 NEW_FUNC[1/1]: 0x1c30d58 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:02.717 #24 NEW cov: 12473 ft: 14628 corp: 11/136b lim: 25 exec/s: 0 rss: 73Mb L: 21/24 MS: 1 InsertRepeatedBytes- 00:09:02.717 [2024-11-05 10:36:28.695242] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:02.717 [2024-11-05 10:36:28.695270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.717 [2024-11-05 10:36:28.695317] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:02.717 [2024-11-05 10:36:28.695334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.717 [2024-11-05 10:36:28.695377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:02.717 [2024-11-05 10:36:28.695394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.717 [2024-11-05 10:36:28.695451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:02.717 [2024-11-05 10:36:28.695467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.717 #25 NEW cov: 12473 ft: 14646 corp: 12/157b lim: 25 exec/s: 25 rss: 73Mb L: 21/24 MS: 1 ChangeByte- 00:09:02.717 [2024-11-05 10:36:28.755013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 
00:09:02.717 [2024-11-05 10:36:28.755041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.717 #30 NEW cov: 12473 ft: 14696 corp: 13/163b lim: 25 exec/s: 30 rss: 73Mb L: 6/24 MS: 5 EraseBytes-InsertByte-ChangeBit-ChangeByte-CopyPart- 00:09:02.975 [2024-11-05 10:36:28.795398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:02.975 [2024-11-05 10:36:28.795427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.975 [2024-11-05 10:36:28.795486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:02.975 [2024-11-05 10:36:28.795500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.975 [2024-11-05 10:36:28.795559] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:02.975 [2024-11-05 10:36:28.795578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.975 #31 NEW cov: 12473 ft: 14717 corp: 14/180b lim: 25 exec/s: 31 rss: 73Mb L: 17/24 MS: 1 ChangeByte- 00:09:02.975 [2024-11-05 10:36:28.835501] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:02.975 [2024-11-05 10:36:28.835530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.975 [2024-11-05 10:36:28.835591] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:02.975 [2024-11-05 10:36:28.835605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.975 [2024-11-05 10:36:28.835665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:02.975 [2024-11-05 10:36:28.835682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.975 #32 NEW cov: 12473 ft: 14761 corp: 15/198b lim: 25 exec/s: 32 rss: 74Mb L: 18/24 MS: 1 ChangeByte- 00:09:02.975 [2024-11-05 10:36:28.895659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:02.975 [2024-11-05 10:36:28.895687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.975 [2024-11-05 10:36:28.895740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:02.975 [2024-11-05 10:36:28.895757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.975 [2024-11-05 10:36:28.895799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:02.975 [2024-11-05 10:36:28.895817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.975 #33 NEW cov: 12473 ft: 14877 corp: 16/216b lim: 25 exec/s: 33 rss: 74Mb L: 18/24 MS: 1 ChangeBinInt- 
00:09:02.975 [2024-11-05 10:36:28.956128] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:02.975 [2024-11-05 10:36:28.956157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.975 [2024-11-05 10:36:28.956213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:02.975 [2024-11-05 10:36:28.956230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.975 [2024-11-05 10:36:28.956264] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:02.975 [2024-11-05 10:36:28.956282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.976 [2024-11-05 10:36:28.956337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:02.976 [2024-11-05 10:36:28.956355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.976 [2024-11-05 10:36:28.956415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:09:02.976 [2024-11-05 10:36:28.956431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:02.976 #34 NEW cov: 12473 ft: 14970 corp: 17/241b lim: 25 exec/s: 34 rss: 74Mb L: 25/25 MS: 1 CrossOver- 00:09:02.976 [2024-11-05 10:36:29.016290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:02.976 [2024-11-05 10:36:29.016318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.976 [2024-11-05 10:36:29.016375] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:02.976 [2024-11-05 10:36:29.016394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.976 [2024-11-05 10:36:29.016424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:02.976 [2024-11-05 10:36:29.016441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.976 [2024-11-05 10:36:29.016499] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:02.976 [2024-11-05 10:36:29.016516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.976 [2024-11-05 10:36:29.016571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:09:02.976 [2024-11-05 10:36:29.016586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:03.234 #35 NEW cov: 12473 ft: 15048 corp: 18/266b lim: 25 exec/s: 35 rss: 74Mb L: 25/25 MS: 1 CopyPart- 00:09:03.234 [2024-11-05 10:36:29.076150] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 
cid:0 nsid:0 00:09:03.234 [2024-11-05 10:36:29.076177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.234 [2024-11-05 10:36:29.076231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:03.234 [2024-11-05 10:36:29.076247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:03.234 [2024-11-05 10:36:29.076301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:03.234 [2024-11-05 10:36:29.076316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:03.234 #36 NEW cov: 12473 ft: 15062 corp: 19/283b lim: 25 exec/s: 36 rss: 74Mb L: 17/25 MS: 1 CMP- DE: "\3779q\3337\247eb"- 00:09:03.234 [2024-11-05 10:36:29.116029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:03.234 [2024-11-05 10:36:29.116055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.234 #39 NEW cov: 12473 ft: 15104 corp: 20/291b lim: 25 exec/s: 39 rss: 74Mb L: 8/25 MS: 3 EraseBytes-InsertByte-CopyPart- 00:09:03.234 [2024-11-05 10:36:29.156648] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:03.234 [2024-11-05 10:36:29.156676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.234 [2024-11-05 10:36:29.156726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:03.234 [2024-11-05 10:36:29.156742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:03.234 [2024-11-05 10:36:29.156767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:03.234 [2024-11-05 10:36:29.156785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:03.234 [2024-11-05 10:36:29.156844] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:03.234 [2024-11-05 10:36:29.156861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:03.234 [2024-11-05 10:36:29.156919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:09:03.234 [2024-11-05 10:36:29.156936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:03.234 #40 NEW cov: 12473 ft: 15168 corp: 21/316b lim: 25 exec/s: 40 rss: 74Mb L: 25/25 MS: 1 CrossOver- 00:09:03.234 [2024-11-05 10:36:29.196233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:03.234 [2024-11-05 10:36:29.196260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.234 #41 NEW cov: 12473 ft: 15191 corp: 22/324b lim: 25 exec/s: 41 rss: 74Mb L: 8/25 MS: 1 ChangeByte- 
00:09:03.234 [2024-11-05 10:36:29.236647] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:03.234 [2024-11-05 10:36:29.236675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.234 [2024-11-05 10:36:29.236728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:03.234 [2024-11-05 10:36:29.236746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:03.234 [2024-11-05 10:36:29.236801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:03.234 [2024-11-05 10:36:29.236818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:03.234 #42 NEW cov: 12473 ft: 15221 corp: 23/341b lim: 25 exec/s: 42 rss: 74Mb L: 17/25 MS: 1 ChangeBinInt- 00:09:03.234 [2024-11-05 10:36:29.296845] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:03.234 [2024-11-05 10:36:29.296872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.234 [2024-11-05 10:36:29.296925] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:03.234 [2024-11-05 10:36:29.296942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:03.234 [2024-11-05 10:36:29.297001] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:03.234 [2024-11-05 10:36:29.297018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:03.493 #43 NEW cov: 12473 ft: 15269 corp: 24/359b lim: 25 exec/s: 43 rss: 74Mb L: 18/25 MS: 1 ChangeBinInt- 00:09:03.493 [2024-11-05 10:36:29.336942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:03.493 [2024-11-05 10:36:29.336970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.493 [2024-11-05 10:36:29.337024] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:03.493 [2024-11-05 10:36:29.337039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:03.493 [2024-11-05 10:36:29.337089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:03.493 [2024-11-05 10:36:29.337105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:03.493 #44 NEW cov: 12473 ft: 15320 corp: 25/377b lim: 25 exec/s: 44 rss: 74Mb L: 18/25 MS: 1 ChangeByte- 00:09:03.493 [2024-11-05 10:36:29.396855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:03.493 [2024-11-05 10:36:29.396881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.493 
#45 NEW cov: 12473 ft: 15327 corp: 26/385b lim: 25 exec/s: 45 rss: 74Mb L: 8/25 MS: 1 ChangeByte- 00:09:03.493 [2024-11-05 10:36:29.437203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:03.493 [2024-11-05 10:36:29.437231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.493 [2024-11-05 10:36:29.437282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:03.493 [2024-11-05 10:36:29.437299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:03.493 [2024-11-05 10:36:29.437347] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:03.493 [2024-11-05 10:36:29.437364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:03.493 #46 NEW cov: 12473 ft: 15339 corp: 27/403b lim: 25 exec/s: 46 rss: 74Mb L: 18/25 MS: 1 CrossOver- 00:09:03.493 [2024-11-05 10:36:29.497460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:03.493 [2024-11-05 10:36:29.497489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.493 [2024-11-05 10:36:29.497545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:03.493 [2024-11-05 10:36:29.497558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:03.493 [2024-11-05 10:36:29.497615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:03.493 [2024-11-05 10:36:29.497632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:03.493 #47 NEW cov: 12473 ft: 15347 corp: 28/421b lim: 25 exec/s: 47 rss: 74Mb L: 18/25 MS: 1 CrossOver- 00:09:03.493 [2024-11-05 10:36:29.557591] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:03.493 [2024-11-05 10:36:29.557620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.493 [2024-11-05 10:36:29.557676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:03.493 [2024-11-05 10:36:29.557691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:03.493 [2024-11-05 10:36:29.557740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:03.493 [2024-11-05 10:36:29.557757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:03.752 #48 NEW cov: 12473 ft: 15378 corp: 29/439b lim: 25 exec/s: 48 rss: 74Mb L: 18/25 MS: 1 CopyPart- 00:09:03.752 [2024-11-05 10:36:29.597988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:03.752 [2024-11-05 10:36:29.598016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.752 [2024-11-05 10:36:29.598088] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:03.752 [2024-11-05 10:36:29.598105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:03.752 [2024-11-05 10:36:29.598160] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:03.752 [2024-11-05 10:36:29.598180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:03.752 [2024-11-05 10:36:29.598238] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:03.752 [2024-11-05 10:36:29.598255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:03.752 [2024-11-05 10:36:29.598315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:09:03.752 [2024-11-05 10:36:29.598332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:03.753 #49 NEW cov: 12473 ft: 15391 corp: 30/464b lim: 25 exec/s: 49 rss: 74Mb L: 25/25 MS: 1 ChangeByte- 00:09:03.753 [2024-11-05 10:36:29.657891] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:03.753 [2024-11-05 10:36:29.657919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.753 [2024-11-05 10:36:29.657979] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:03.753 [2024-11-05 10:36:29.657993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:03.753 [2024-11-05 10:36:29.658049] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:03.753 [2024-11-05 10:36:29.658066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:03.753 #50 NEW cov: 12473 ft: 15397 corp: 31/481b lim: 25 exec/s: 50 rss: 74Mb L: 17/25 MS: 1 ChangeBit- 00:09:03.753 [2024-11-05 10:36:29.698277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:03.753 [2024-11-05 10:36:29.698304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.753 [2024-11-05 10:36:29.698360] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:03.753 [2024-11-05 10:36:29.698377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:03.753 [2024-11-05 10:36:29.698403] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:03.753 [2024-11-05 10:36:29.698419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:03.753 [2024-11-05 10:36:29.698475] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:03.753 [2024-11-05 10:36:29.698492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:03.753 [2024-11-05 10:36:29.698551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:09:03.753 [2024-11-05 10:36:29.698569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:03.753 #51 NEW cov: 12473 ft: 15412 corp: 32/506b lim: 25 exec/s: 25 rss: 74Mb L: 25/25 MS: 1 ChangeByte- 00:09:03.753 #51 DONE cov: 12473 ft: 15412 corp: 32/506b lim: 25 exec/s: 25 rss: 74Mb 00:09:03.753 ###### Recommended dictionary. ###### 00:09:03.753 "\3779q\3337\247eb" # Uses: 0 00:09:03.753 ###### End of recommended dictionary. ###### 00:09:03.753 Done 51 runs in 2 second(s) 00:09:04.013 10:36:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:09:04.013 10:36:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:04.013 10:36:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:04.013 10:36:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:09:04.013 10:36:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:09:04.013 10:36:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:04.013 10:36:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:04.013 10:36:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:09:04.013 10:36:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:09:04.013 10:36:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:04.013 10:36:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:04.013 10:36:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:09:04.013 10:36:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:09:04.013 10:36:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:09:04.013 10:36:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:09:04.013 10:36:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:04.013 10:36:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:04.013 10:36:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:04.013 10:36:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 
00:09:04.013 [2024-11-05 10:36:29.908470] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:09:04.013 [2024-11-05 10:36:29.908541] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2869292 ] 00:09:04.271 [2024-11-05 10:36:30.179521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.271 [2024-11-05 10:36:30.227994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.271 [2024-11-05 10:36:30.291940] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:04.271 [2024-11-05 10:36:30.308165] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:09:04.271 INFO: Running with entropic power schedule (0xFF, 100). 00:09:04.271 INFO: Seed: 367329330 00:09:04.271 INFO: Loaded 1 modules (387441 inline 8-bit counters): 387441 [0x2c3ac4c, 0x2c995bd), 00:09:04.271 INFO: Loaded 1 PC tables (387441 PCs): 387441 [0x2c995c0,0x3282cd0), 00:09:04.271 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:09:04.271 INFO: A corpus is not provided, starting from an empty corpus 00:09:04.271 #2 INITED exec/s: 0 rss: 66Mb 00:09:04.271 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:04.271 This may also happen if the target rejected all inputs we tried so far 00:09:04.529 [2024-11-05 10:36:30.353996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4051049678157527096 len:14393 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.529 [2024-11-05 10:36:30.354027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.529 [2024-11-05 10:36:30.354084] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4051049678932293688 len:14393 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.529 [2024-11-05 10:36:30.354099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:04.787 NEW_FUNC[1/717]: 0x467728 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:09:04.787 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:04.787 #6 NEW cov: 12318 ft: 12315 corp: 2/55b lim: 100 exec/s: 0 rss: 73Mb L: 54/54 MS: 4 CopyPart-ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:09:04.787 [2024-11-05 10:36:30.815122] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4051049678157527096 len:14393 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.787 [2024-11-05 10:36:30.815161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.787 [2024-11-05 10:36:30.815207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4051049678932293688 len:14393 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.787 [2024-11-05 10:36:30.815223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:09:04.787 #17 NEW cov: 12431 ft: 12770 corp: 3/109b lim: 100 exec/s: 0 rss: 73Mb L: 54/54 MS: 1 ChangeByte- 00:09:05.045 [2024-11-05 10:36:30.875005] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.045 [2024-11-05 10:36:30.875035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.045 #18 NEW cov: 12437 ft: 14025 corp: 4/139b lim: 100 exec/s: 0 rss: 73Mb L: 30/54 MS: 1 InsertRepeatedBytes- 00:09:05.045 [2024-11-05 10:36:30.915056] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.045 [2024-11-05 10:36:30.915086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.045 #19 NEW cov: 12522 ft: 14246 corp: 5/169b lim: 100 exec/s: 0 rss: 73Mb L: 30/54 MS: 1 ChangeBinInt- 00:09:05.045 [2024-11-05 10:36:30.975801] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9621242987464197509 len:34182 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.045 [2024-11-05 10:36:30.975830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.045 [2024-11-05 10:36:30.975882] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9621242987464197509 len:34182 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.045 [2024-11-05 10:36:30.975899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.045 [2024-11-05 10:36:30.975954] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9621242987464197509 len:34182 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.045 [2024-11-05 10:36:30.975972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:05.045 [2024-11-05 10:36:30.976028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:9621242987464197509 len:34182 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.045 [2024-11-05 10:36:30.976046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:05.045 #22 NEW cov: 12522 ft: 14779 corp: 6/250b lim: 100 exec/s: 0 rss: 73Mb L: 81/81 MS: 3 ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:09:05.045 [2024-11-05 10:36:31.015350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073701228543 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.045 [2024-11-05 10:36:31.015378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.045 #23 NEW cov: 12522 ft: 14820 corp: 7/281b lim: 100 exec/s: 0 rss: 73Mb L: 31/81 MS: 1 InsertByte- 00:09:05.045 [2024-11-05 10:36:31.055476] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073701228543 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.045 [2024-11-05 10:36:31.055504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.045 #24 NEW cov: 12522 ft: 14910 corp: 8/312b lim: 100 exec/s: 0 rss: 73Mb L: 31/81 MS: 1 ShuffleBytes- 00:09:05.045 [2024-11-05 10:36:31.115655] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.045 [2024-11-05 10:36:31.115683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.303 #25 NEW cov: 12522 ft: 14947 corp: 9/342b lim: 100 exec/s: 0 rss: 73Mb L: 30/81 MS: 1 ChangeByte- 00:09:05.304 [2024-11-05 10:36:31.155965] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.304 [2024-11-05 10:36:31.155994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.304 [2024-11-05 10:36:31.156054] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.304 [2024-11-05 10:36:31.156071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.304 #26 NEW cov: 12522 ft: 15082 corp: 10/390b lim: 100 exec/s: 0 rss: 73Mb L: 48/81 MS: 1 InsertRepeatedBytes- 00:09:05.304 [2024-11-05 10:36:31.196370] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4286644223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.304 [2024-11-05 10:36:31.196400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.304 [2024-11-05 10:36:31.196448] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.304 [2024-11-05 10:36:31.196465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.304 [2024-11-05 10:36:31.196506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.304 [2024-11-05 10:36:31.196523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:05.304 [2024-11-05 10:36:31.196580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.304 [2024-11-05 10:36:31.196597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:05.304 #27 NEW cov: 12522 ft: 15133 corp: 11/473b lim: 100 exec/s: 0 rss: 73Mb L: 83/83 MS: 1 InsertRepeatedBytes- 00:09:05.304 [2024-11-05 10:36:31.236004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:34560 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.304 [2024-11-05 10:36:31.236032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.304 NEW_FUNC[1/1]: 0x1c30d58 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:05.304 #28 NEW cov: 12545 ft: 15237 corp: 12/504b lim: 100 exec/s: 0 rss: 73Mb L: 31/83 MS: 1 InsertByte- 00:09:05.304 [2024-11-05 10:36:31.296179] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:34560 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.304 [2024-11-05 10:36:31.296208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.304 #29 NEW cov: 12545 ft: 15270 corp: 13/535b lim: 100 exec/s: 0 rss: 73Mb L: 31/83 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:09:05.304 [2024-11-05 10:36:31.356531] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.304 [2024-11-05 10:36:31.356558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.304 [2024-11-05 10:36:31.356623] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446463672474664959 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.304 [2024-11-05 10:36:31.356641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.304 #30 NEW cov: 12545 ft: 15286 corp: 14/576b lim: 100 exec/s: 30 rss: 73Mb L: 41/83 MS: 1 CopyPart- 00:09:05.562 [2024-11-05 10:36:31.397034] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4286644223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.562 [2024-11-05 10:36:31.397063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.562 [2024-11-05 10:36:31.397110] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4278190080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.562 [2024-11-05 10:36:31.397126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.562 [2024-11-05 10:36:31.397166] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.562 [2024-11-05 10:36:31.397184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:05.562 [2024-11-05 10:36:31.397238] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744069414584320 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.562 [2024-11-05 10:36:31.397271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:05.562 #31 NEW cov: 12545 ft: 15314 corp: 15/667b lim: 100 exec/s: 31 rss: 74Mb L: 91/91 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:09:05.562 [2024-11-05 10:36:31.457218] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4286644223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.562 [2024-11-05 10:36:31.457247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.562 
[2024-11-05 10:36:31.457296] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.562 [2024-11-05 10:36:31.457312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.562 [2024-11-05 10:36:31.457353] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:838860800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.562 [2024-11-05 10:36:31.457371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:05.562 [2024-11-05 10:36:31.457427] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.562 [2024-11-05 10:36:31.457443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:05.562 #32 NEW cov: 12545 ft: 15366 corp: 16/751b lim: 100 exec/s: 32 rss: 74Mb L: 84/91 MS: 1 InsertByte- 00:09:05.562 [2024-11-05 10:36:31.496981] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.563 [2024-11-05 10:36:31.497008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.563 [2024-11-05 10:36:31.497073] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9295428535676043263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.563 [2024-11-05 10:36:31.497090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.563 #33 NEW cov: 12545 ft: 15407 corp: 17/803b lim: 100 exec/s: 33 rss: 74Mb L: 52/91 MS: 1 CrossOver- 00:09:05.563 [2024-11-05 10:36:31.556962] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:16640 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.563 [2024-11-05 10:36:31.556989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.563 #34 NEW cov: 12545 ft: 15430 corp: 18/834b lim: 100 exec/s: 34 rss: 74Mb L: 31/91 MS: 1 CopyPart- 00:09:05.563 [2024-11-05 10:36:31.617493] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073701228416 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.563 [2024-11-05 10:36:31.617520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.563 [2024-11-05 10:36:31.617574] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.563 [2024-11-05 10:36:31.617588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.563 [2024-11-05 10:36:31.617642] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.563 [2024-11-05 10:36:31.617659] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:05.821 #35 NEW cov: 12545 ft: 15705 corp: 19/894b lim: 100 exec/s: 35 rss: 74Mb L: 60/91 MS: 1 CopyPart- 00:09:05.821 [2024-11-05 10:36:31.677294] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.821 [2024-11-05 10:36:31.677323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.821 #36 NEW cov: 12545 ft: 15782 corp: 20/925b lim: 100 exec/s: 36 rss: 74Mb L: 31/91 MS: 1 InsertByte- 00:09:05.821 [2024-11-05 10:36:31.717635] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.821 [2024-11-05 10:36:31.717663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.821 [2024-11-05 10:36:31.717726] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9295428535676043263 len:50030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.821 [2024-11-05 10:36:31.717740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.821 #37 NEW cov: 12545 ft: 15819 corp: 21/977b lim: 100 exec/s: 37 rss: 74Mb L: 52/91 MS: 1 CMP- DE: "\303m\011C\330q:\000"- 00:09:05.821 [2024-11-05 10:36:31.778199] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4286644223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.821 [2024-11-05 10:36:31.778228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.821 [2024-11-05 10:36:31.778278] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.821 [2024-11-05 10:36:31.778294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.821 [2024-11-05 10:36:31.778352] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:2251800652546048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.821 [2024-11-05 10:36:31.778370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:05.821 [2024-11-05 10:36:31.778425] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.821 [2024-11-05 10:36:31.778442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:05.821 #38 NEW cov: 12545 ft: 15820 corp: 22/1061b lim: 100 exec/s: 38 rss: 74Mb L: 84/91 MS: 1 ChangeBinInt- 00:09:05.821 [2024-11-05 10:36:31.837813] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446463702539436031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.821 [2024-11-05 10:36:31.837841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:09:05.821 #39 NEW cov: 12545 ft: 15836 corp: 23/1092b lim: 100 exec/s: 39 rss: 74Mb L: 31/91 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:09:05.821 [2024-11-05 10:36:31.897982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:34560 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.821 [2024-11-05 10:36:31.898009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.080 #40 NEW cov: 12545 ft: 15843 corp: 24/1123b lim: 100 exec/s: 40 rss: 74Mb L: 31/91 MS: 1 CopyPart- 00:09:06.080 [2024-11-05 10:36:31.938102] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:34560 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.080 [2024-11-05 10:36:31.938129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.080 #41 NEW cov: 12545 ft: 15889 corp: 25/1159b lim: 100 exec/s: 41 rss: 74Mb L: 36/91 MS: 1 InsertRepeatedBytes- 00:09:06.080 [2024-11-05 10:36:31.998322] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073701228543 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.080 [2024-11-05 10:36:31.998350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.080 #42 NEW cov: 12545 ft: 15906 corp: 26/1190b lim: 100 exec/s: 42 rss: 74Mb L: 31/91 MS: 1 ShuffleBytes- 00:09:06.080 [2024-11-05 10:36:32.038928] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9621242987464197509 len:34182 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.080 [2024-11-05 10:36:32.038956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.080 [2024-11-05 10:36:32.039005] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9621242987464197509 len:34182 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.080 [2024-11-05 10:36:32.039022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.080 [2024-11-05 10:36:32.039069] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9621242987464197509 len:34182 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.080 [2024-11-05 10:36:32.039086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:06.080 [2024-11-05 10:36:32.039144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:9621242987464197509 len:34182 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.080 [2024-11-05 10:36:32.039161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:06.080 #43 NEW cov: 12545 ft: 15925 corp: 27/1271b lim: 100 exec/s: 43 rss: 74Mb L: 81/91 MS: 1 ChangeBinInt- 00:09:06.080 [2024-11-05 10:36:32.098578] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:34560 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.080 [2024-11-05 10:36:32.098606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.080 #44 NEW cov: 12545 ft: 15929 corp: 28/1302b lim: 100 exec/s: 44 rss: 74Mb L: 31/91 MS: 1 CopyPart- 00:09:06.080 [2024-11-05 10:36:32.138884] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.080 [2024-11-05 10:36:32.138912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.080 [2024-11-05 10:36:32.138973] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9295428535676043263 len:50030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.080 [2024-11-05 10:36:32.138990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.339 #45 NEW cov: 12545 ft: 15942 corp: 29/1354b lim: 100 exec/s: 45 rss: 74Mb L: 52/91 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:09:06.339 [2024-11-05 10:36:32.199069] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.339 [2024-11-05 10:36:32.199098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.339 [2024-11-05 10:36:32.199172] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073692774400 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.339 [2024-11-05 10:36:32.199189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.339 #46 NEW cov: 12545 ft: 15954 corp: 30/1403b lim: 100 exec/s: 46 rss: 74Mb L: 49/91 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:09:06.339 [2024-11-05 10:36:32.259035] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.339 [2024-11-05 10:36:32.259064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.339 #47 NEW cov: 12545 ft: 15984 corp: 31/1425b lim: 100 exec/s: 47 rss: 74Mb L: 22/91 MS: 1 EraseBytes- 00:09:06.339 [2024-11-05 10:36:32.299725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4286644223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.339 [2024-11-05 10:36:32.299753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.339 [2024-11-05 10:36:32.299802] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4278190080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.339 [2024-11-05 10:36:32.299818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.339 [2024-11-05 10:36:32.299859] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.339 [2024-11-05 10:36:32.299876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 
m:0 dnr:1 00:09:06.339 [2024-11-05 10:36:32.299930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744069414584320 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.339 [2024-11-05 10:36:32.299947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:06.339 #48 NEW cov: 12545 ft: 16002 corp: 32/1516b lim: 100 exec/s: 24 rss: 75Mb L: 91/91 MS: 1 CMP- DE: "\377\377\377\377"- 00:09:06.339 #48 DONE cov: 12545 ft: 16002 corp: 32/1516b lim: 100 exec/s: 24 rss: 75Mb 00:09:06.339 ###### Recommended dictionary. ###### 00:09:06.339 "\377\377\377\377\377\377\377\377" # Uses: 3 00:09:06.339 "\303m\011C\330q:\000" # Uses: 0 00:09:06.339 "\001\000\000\000\000\000\000\000" # Uses: 0 00:09:06.339 "\377\377\377\377" # Uses: 0 00:09:06.339 ###### End of recommended dictionary. ###### 00:09:06.339 Done 48 runs in 2 second(s) 00:09:06.598 10:36:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:09:06.598 10:36:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:06.598 10:36:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:06.598 10:36:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:09:06.598 00:09:06.598 real 1m5.707s 00:09:06.598 user 1m39.753s 00:09:06.598 sys 0m8.453s 00:09:06.598 10:36:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:06.598 10:36:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:09:06.598 ************************************ 00:09:06.598 END TEST nvmf_llvm_fuzz 00:09:06.598 ************************************ 00:09:06.598 10:36:32 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:09:06.598 10:36:32 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:09:06.598 10:36:32 llvm_fuzz -- fuzz/llvm.sh@20 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:09:06.598 10:36:32 llvm_fuzz -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:06.598 10:36:32 llvm_fuzz -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:06.598 10:36:32 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:09:06.598 ************************************ 00:09:06.598 START TEST vfio_llvm_fuzz 00:09:06.598 ************************************ 00:09:06.598 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:09:06.598 * Looking for test storage... 
00:09:06.598 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:06.598 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:06.598 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:09:06.598 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:06.859 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:06.859 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.859 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.859 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.859 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.859 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.859 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.859 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.859 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.859 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.859 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.859 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.859 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:09:06.859 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:09:06.859 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.859 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:06.859 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:09:06.859 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:09:06.859 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.859 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:06.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.860 --rc genhtml_branch_coverage=1 00:09:06.860 --rc genhtml_function_coverage=1 00:09:06.860 --rc genhtml_legend=1 00:09:06.860 --rc geninfo_all_blocks=1 00:09:06.860 --rc geninfo_unexecuted_blocks=1 00:09:06.860 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:06.860 ' 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:06.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.860 --rc genhtml_branch_coverage=1 00:09:06.860 --rc genhtml_function_coverage=1 00:09:06.860 --rc genhtml_legend=1 00:09:06.860 --rc geninfo_all_blocks=1 00:09:06.860 --rc geninfo_unexecuted_blocks=1 00:09:06.860 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:06.860 ' 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:06.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.860 --rc genhtml_branch_coverage=1 00:09:06.860 --rc genhtml_function_coverage=1 00:09:06.860 --rc genhtml_legend=1 00:09:06.860 --rc geninfo_all_blocks=1 00:09:06.860 --rc geninfo_unexecuted_blocks=1 00:09:06.860 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:06.860 ' 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:06.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.860 --rc genhtml_branch_coverage=1 00:09:06.860 --rc genhtml_function_coverage=1 00:09:06.860 --rc genhtml_legend=1 00:09:06.860 --rc geninfo_all_blocks=1 00:09:06.860 --rc geninfo_unexecuted_blocks=1 00:09:06.860 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:06.860 ' 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:06.860 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:06.861 #define SPDK_CONFIG_H 00:09:06.861 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:06.861 #define SPDK_CONFIG_APPS 1 00:09:06.861 #define SPDK_CONFIG_ARCH native 00:09:06.861 #undef SPDK_CONFIG_ASAN 00:09:06.861 #undef SPDK_CONFIG_AVAHI 00:09:06.861 #undef SPDK_CONFIG_CET 00:09:06.861 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:06.861 #define SPDK_CONFIG_COVERAGE 1 00:09:06.861 #define SPDK_CONFIG_CROSS_PREFIX 00:09:06.861 #undef SPDK_CONFIG_CRYPTO 00:09:06.861 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:06.861 #undef SPDK_CONFIG_CUSTOMOCF 00:09:06.861 #undef SPDK_CONFIG_DAOS 00:09:06.861 #define SPDK_CONFIG_DAOS_DIR 00:09:06.861 #define SPDK_CONFIG_DEBUG 1 00:09:06.861 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:06.861 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:09:06.861 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:06.861 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:06.861 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:06.861 #undef SPDK_CONFIG_DPDK_UADK 00:09:06.861 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:09:06.861 #define SPDK_CONFIG_EXAMPLES 1 00:09:06.861 #undef SPDK_CONFIG_FC 00:09:06.861 #define SPDK_CONFIG_FC_PATH 00:09:06.861 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:06.861 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:06.861 #define SPDK_CONFIG_FSDEV 1 00:09:06.861 #undef SPDK_CONFIG_FUSE 00:09:06.861 #define SPDK_CONFIG_FUZZER 1 00:09:06.861 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:09:06.861 #undef 
SPDK_CONFIG_GOLANG 00:09:06.861 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:06.861 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:06.861 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:06.861 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:06.861 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:06.861 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:06.861 #undef SPDK_CONFIG_HAVE_LZ4 00:09:06.861 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:06.861 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:06.861 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:06.861 #define SPDK_CONFIG_IDXD 1 00:09:06.861 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:06.861 #undef SPDK_CONFIG_IPSEC_MB 00:09:06.861 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:06.861 #define SPDK_CONFIG_ISAL 1 00:09:06.861 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:06.861 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:06.861 #define SPDK_CONFIG_LIBDIR 00:09:06.861 #undef SPDK_CONFIG_LTO 00:09:06.861 #define SPDK_CONFIG_MAX_LCORES 128 00:09:06.861 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:06.861 #define SPDK_CONFIG_NVME_CUSE 1 00:09:06.861 #undef SPDK_CONFIG_OCF 00:09:06.861 #define SPDK_CONFIG_OCF_PATH 00:09:06.861 #define SPDK_CONFIG_OPENSSL_PATH 00:09:06.861 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:06.861 #define SPDK_CONFIG_PGO_DIR 00:09:06.861 #undef SPDK_CONFIG_PGO_USE 00:09:06.861 #define SPDK_CONFIG_PREFIX /usr/local 00:09:06.861 #undef SPDK_CONFIG_RAID5F 00:09:06.861 #undef SPDK_CONFIG_RBD 00:09:06.861 #define SPDK_CONFIG_RDMA 1 00:09:06.861 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:06.861 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:06.861 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:06.861 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:06.861 #undef SPDK_CONFIG_SHARED 00:09:06.861 #undef SPDK_CONFIG_SMA 00:09:06.861 #define SPDK_CONFIG_TESTS 1 00:09:06.861 #undef SPDK_CONFIG_TSAN 00:09:06.861 #define SPDK_CONFIG_UBLK 1 00:09:06.861 #define SPDK_CONFIG_UBSAN 1 00:09:06.861 #undef SPDK_CONFIG_UNIT_TESTS 00:09:06.861 #undef SPDK_CONFIG_URING 00:09:06.861 #define SPDK_CONFIG_URING_PATH 00:09:06.861 #undef SPDK_CONFIG_URING_ZNS 00:09:06.861 #undef SPDK_CONFIG_USDT 00:09:06.861 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:06.861 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:06.861 #define SPDK_CONFIG_VFIO_USER 1 00:09:06.861 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:06.861 #define SPDK_CONFIG_VHOST 1 00:09:06.861 #define SPDK_CONFIG_VIRTIO 1 00:09:06.861 #undef SPDK_CONFIG_VTUNE 00:09:06.861 #define SPDK_CONFIG_VTUNE_DIR 00:09:06.861 #define SPDK_CONFIG_WERROR 1 00:09:06.861 #define SPDK_CONFIG_WPDK_DIR 00:09:06.861 #undef SPDK_CONFIG_XNVME 00:09:06.861 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.861 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:06.862 10:36:32 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:06.862 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # 
PYTHONDONTWRITEBYTECODE=1 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@204 -- # cat 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV= 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ 1 -eq 1 ]] 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV=1 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@277 -- # export valgrind= 00:09:06.863 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@277 -- # valgrind= 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@283 -- # uname -s 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # MAKE=make 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j72 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@307 -- # TEST_MODE= 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # [[ -z 2869682 ]] 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # kill -0 2869682 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@342 -- # local mount target_dir 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@344 -- # local -A mounts fss sizes 
avails uses 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.3yg8M3 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.3yg8M3/tests/vfio /tmp/spdk.3yg8M3 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # df -T 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=81439744000 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=94500290560 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=13060546560 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:06.864 
10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47245381632 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250145280 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4763648 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=18893955072 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=18900058112 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=6103040 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=46175846400 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250145280 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=1074298880 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=9450016768 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=9450029056 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:09:06.864 * Looking for test storage... 
00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@379 -- # local target_space new_size 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:06.864 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # mount=/ 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # target_space=81439744000 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@392 -- # new_size=15275139072 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:07.123 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # return 0 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1683 -- # true 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:09:07.123 10:36:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:07.123 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:07.123 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:07.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.124 --rc genhtml_branch_coverage=1 00:09:07.124 --rc genhtml_function_coverage=1 00:09:07.124 --rc genhtml_legend=1 00:09:07.124 --rc geninfo_all_blocks=1 00:09:07.124 --rc geninfo_unexecuted_blocks=1 00:09:07.124 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:07.124 ' 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:07.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.124 --rc genhtml_branch_coverage=1 00:09:07.124 --rc genhtml_function_coverage=1 00:09:07.124 --rc genhtml_legend=1 00:09:07.124 --rc geninfo_all_blocks=1 00:09:07.124 --rc geninfo_unexecuted_blocks=1 00:09:07.124 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:07.124 ' 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:07.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.124 --rc genhtml_branch_coverage=1 00:09:07.124 --rc genhtml_function_coverage=1 00:09:07.124 --rc genhtml_legend=1 00:09:07.124 --rc geninfo_all_blocks=1 00:09:07.124 --rc geninfo_unexecuted_blocks=1 00:09:07.124 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:07.124 ' 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:07.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.124 --rc genhtml_branch_coverage=1 00:09:07.124 --rc genhtml_function_coverage=1 00:09:07.124 --rc genhtml_legend=1 00:09:07.124 --rc geninfo_all_blocks=1 00:09:07.124 --rc geninfo_unexecuted_blocks=1 00:09:07.124 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:07.124 ' 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:09:07.124 10:36:33 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:09:07.124 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:07.124 10:36:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:09:07.124 [2024-11-05 10:36:33.088996] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:09:07.124 [2024-11-05 10:36:33.089078] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2869741 ] 00:09:07.382 [2024-11-05 10:36:33.233745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.382 [2024-11-05 10:36:33.292112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.640 INFO: Running with entropic power schedule (0xFF, 100). 00:09:07.640 INFO: Seed: 3551335185 00:09:07.640 INFO: Loaded 1 modules (384677 inline 8-bit counters): 384677 [0x2bfc44c, 0x2c5a2f1), 00:09:07.640 INFO: Loaded 1 PC tables (384677 PCs): 384677 [0x2c5a2f8,0x3238d48), 00:09:07.640 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:09:07.640 INFO: A corpus is not provided, starting from an empty corpus 00:09:07.640 #2 INITED exec/s: 0 rss: 67Mb 00:09:07.640 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:07.640 This may also happen if the target rejected all inputs we tried so far 00:09:07.640 [2024-11-05 10:36:33.579384] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:09:08.156 NEW_FUNC[1/672]: 0x43b5e8 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:09:08.156 NEW_FUNC[2/672]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:08.156 #22 NEW cov: 11159 ft: 11110 corp: 2/7b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 5 CopyPart-ShuffleBytes-ChangeByte-ChangeBinInt-InsertRepeatedBytes- 00:09:08.156 #24 NEW cov: 11187 ft: 14585 corp: 3/13b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 2 ChangeBit-InsertRepeatedBytes- 00:09:08.414 NEW_FUNC[1/1]: 0x1bfd1a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:08.414 #35 NEW cov: 11204 ft: 15760 corp: 4/19b lim: 6 exec/s: 0 rss: 76Mb L: 6/6 MS: 1 CopyPart- 00:09:08.672 #41 NEW cov: 11204 ft: 15975 corp: 5/25b lim: 6 exec/s: 41 rss: 76Mb L: 6/6 MS: 1 ChangeBit- 00:09:08.672 #47 NEW cov: 11207 ft: 16834 corp: 6/31b lim: 6 exec/s: 47 rss: 77Mb L: 6/6 MS: 1 CrossOver- 00:09:08.930 #48 NEW cov: 11207 ft: 17282 corp: 7/37b lim: 6 exec/s: 48 rss: 77Mb L: 6/6 MS: 1 CopyPart- 00:09:09.189 #49 NEW cov: 11207 ft: 17763 corp: 8/43b lim: 6 exec/s: 49 rss: 77Mb L: 6/6 MS: 1 ChangeBinInt- 00:09:09.447 #50 NEW cov: 11207 ft: 18004 corp: 9/49b lim: 6 exec/s: 50 rss: 77Mb L: 6/6 MS: 1 ShuffleBytes- 00:09:09.447 #51 NEW cov: 11214 ft: 18067 corp: 10/55b lim: 6 exec/s: 51 rss: 77Mb L: 6/6 MS: 1 CrossOver- 00:09:09.705 #52 NEW cov: 11214 ft: 18446 corp: 11/61b lim: 6 exec/s: 26 rss: 77Mb L: 6/6 MS: 1 ChangeBinInt- 00:09:09.705 #52 DONE cov: 11214 ft: 18446 corp: 11/61b lim: 6 exec/s: 26 rss: 77Mb 00:09:09.705 Done 52 runs in 2 second(s) 00:09:09.705 [2024-11-05 10:36:35.656987] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/tmp/vfio-user-0/domain/2: disabling controller 00:09:09.964 10:36:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:09:09.964 10:36:35 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:09.964 10:36:35 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:09.964 10:36:35 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:09:09.964 10:36:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:09:09.964 10:36:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:09.964 10:36:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:09.964 10:36:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:09:09.964 10:36:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:09:09.964 10:36:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:09:09.964 10:36:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:09:09.964 10:36:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:09:09.964 10:36:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:09.964 10:36:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:09.964 10:36:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:09:09.964 10:36:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:09:09.964 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:09.964 10:36:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:09.964 10:36:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:09.964 10:36:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:09:09.964 [2024-11-05 10:36:35.985794] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:09:09.964 [2024-11-05 10:36:35.985870] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2870104 ] 00:09:10.224 [2024-11-05 10:36:36.129408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.224 [2024-11-05 10:36:36.183349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.483 INFO: Running with entropic power schedule (0xFF, 100). 
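The xtrace above is the per-round setup performed by vfio/run.sh: a private /tmp/vfio-user-1 tree, a sed rewrite of the shared fuzz_vfio_json.conf so the vfio-user domain directories land in that tree, two LSAN leak suppressions, and a one-second llvm_vfio_fuzz invocation. Condensed into a standalone sketch, with $SPDK standing in for the long workspace path; the config redirect and the LSAN environment are inferred from the locals shown in the trace, not copied from the script itself:

  # Condensed sketch of one start_llvm_fuzz round (round 1); flags are taken from the trace above.
  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  fuzzer_dir=/tmp/vfio-user-1
  corpus_dir=$SPDK/../corpus/llvm_vfio_1
  mkdir -p "$fuzzer_dir/domain/1" "$fuzzer_dir/domain/2" "$corpus_dir"
  # Point the shared vfio-user JSON config at this round's domain directories.
  sed -e "s%/tmp/vfio-user/domain/1%$fuzzer_dir/domain/1%" \
      -e "s%/tmp/vfio-user/domain/2%$fuzzer_dir/domain/2%" \
      "$SPDK/test/fuzz/llvm/vfio/fuzz_vfio_json.conf" > "$fuzzer_dir/fuzz_vfio_json.conf"
  # Known-benign allocations are suppressed so LSAN does not fail the short run.
  printf 'leak:%s\n' spdk_nvmf_qpair_disconnect nvmf_ctrlr_create > /var/tmp/suppress_vfio_fuzz
  # -m 0x1 / -s 0 : single core, no extra hugepage reservation
  # -t 1          : one-second time budget per round (timen=1 in the trace)
  # -Z 1 / -D ... : fuzzer type 1 (fuzz_vfio_user_version) with its persistent corpus
  LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 \
    "$SPDK/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz" \
      -m 0x1 -s 0 -P "$SPDK/../output/llvm/" \
      -F "$fuzzer_dir/domain/1" -Y "$fuzzer_dir/domain/2" \
      -c "$fuzzer_dir/fuzz_vfio_json.conf" \
      -D "$corpus_dir" -r "$fuzzer_dir/spdk1.sock" \
      -t 1 -Z 1

The same pattern repeats below for /tmp/vfio-user-2 through /tmp/vfio-user-6, changing only the round number passed to -Z and the matching corpus and socket paths.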
00:09:10.483 INFO: Seed: 2140379136 00:09:10.483 INFO: Loaded 1 modules (384677 inline 8-bit counters): 384677 [0x2bfc44c, 0x2c5a2f1), 00:09:10.483 INFO: Loaded 1 PC tables (384677 PCs): 384677 [0x2c5a2f8,0x3238d48), 00:09:10.483 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:09:10.483 INFO: A corpus is not provided, starting from an empty corpus 00:09:10.483 #2 INITED exec/s: 0 rss: 67Mb 00:09:10.483 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:10.483 This may also happen if the target rejected all inputs we tried so far 00:09:10.483 [2024-11-05 10:36:36.455940] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:09:10.741 [2024-11-05 10:36:36.609316] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:10.741 [2024-11-05 10:36:36.609352] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:10.741 [2024-11-05 10:36:36.609385] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:10.999 NEW_FUNC[1/673]: 0x43bb88 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:09:10.999 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:10.999 #117 NEW cov: 11110 ft: 11104 corp: 2/5b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 5 CrossOver-InsertByte-CrossOver-EraseBytes-CopyPart- 00:09:10.999 [2024-11-05 10:36:37.058294] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:10.999 [2024-11-05 10:36:37.058341] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:10.999 [2024-11-05 10:36:37.058428] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:11.257 NEW_FUNC[1/1]: 0x20fc578 in spdk_ioviter_nextv /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/util/iov.c:91 00:09:11.257 #118 NEW cov: 11180 ft: 14752 corp: 3/9b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 ChangeBinInt- 00:09:11.257 [2024-11-05 10:36:37.244082] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:11.257 [2024-11-05 10:36:37.244116] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:11.257 [2024-11-05 10:36:37.244146] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:11.515 NEW_FUNC[1/1]: 0x1bfd1a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:11.515 #131 NEW cov: 11197 ft: 15218 corp: 4/13b lim: 4 exec/s: 0 rss: 76Mb L: 4/4 MS: 3 EraseBytes-ChangeBit-CrossOver- 00:09:11.515 [2024-11-05 10:36:37.418745] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:11.515 [2024-11-05 10:36:37.418776] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:11.515 [2024-11-05 10:36:37.418800] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:11.515 #137 NEW cov: 11197 ft: 16140 corp: 5/17b lim: 4 exec/s: 137 rss: 76Mb L: 4/4 MS: 1 ChangeBit- 00:09:11.773 [2024-11-05 10:36:37.594532] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:11.773 [2024-11-05 10:36:37.594563] vfio_user.c:3106:vfio_user_log: 
*ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:11.773 [2024-11-05 10:36:37.594586] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:11.773 #138 NEW cov: 11197 ft: 17011 corp: 6/21b lim: 4 exec/s: 138 rss: 76Mb L: 4/4 MS: 1 ChangeBit- 00:09:11.773 [2024-11-05 10:36:37.770971] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:11.773 [2024-11-05 10:36:37.771002] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:11.773 [2024-11-05 10:36:37.771026] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:12.032 #139 NEW cov: 11197 ft: 17405 corp: 7/25b lim: 4 exec/s: 139 rss: 77Mb L: 4/4 MS: 1 CrossOver- 00:09:12.032 [2024-11-05 10:36:37.957869] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:12.032 [2024-11-05 10:36:37.957899] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:12.032 [2024-11-05 10:36:37.957923] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:12.032 #145 NEW cov: 11197 ft: 17509 corp: 8/29b lim: 4 exec/s: 145 rss: 77Mb L: 4/4 MS: 1 ChangeBit- 00:09:12.290 [2024-11-05 10:36:38.134113] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:12.290 [2024-11-05 10:36:38.134141] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:12.290 [2024-11-05 10:36:38.134165] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:12.290 #146 NEW cov: 11204 ft: 17965 corp: 9/33b lim: 4 exec/s: 146 rss: 77Mb L: 4/4 MS: 1 ChangeByte- 00:09:12.290 [2024-11-05 10:36:38.320576] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:12.290 [2024-11-05 10:36:38.320605] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:12.290 [2024-11-05 10:36:38.320628] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:12.548 #147 NEW cov: 11204 ft: 18011 corp: 10/37b lim: 4 exec/s: 73 rss: 77Mb L: 4/4 MS: 1 CrossOver- 00:09:12.548 #147 DONE cov: 11204 ft: 18011 corp: 10/37b lim: 4 exec/s: 73 rss: 77Mb 00:09:12.548 Done 147 runs in 2 second(s) 00:09:12.548 [2024-11-05 10:36:38.445974] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:09:12.807 10:36:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:09:12.807 10:36:38 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:12.807 10:36:38 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:12.807 10:36:38 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:09:12.807 10:36:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:09:12.807 10:36:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:12.807 10:36:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:12.807 10:36:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:09:12.807 10:36:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:09:12.807 10:36:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:09:12.807 10:36:38 
llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:09:12.807 10:36:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:09:12.807 10:36:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:12.807 10:36:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:12.807 10:36:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:09:12.807 10:36:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:09:12.807 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:12.807 10:36:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:12.807 10:36:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:12.807 10:36:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:09:12.807 [2024-11-05 10:36:38.769471] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:09:12.807 [2024-11-05 10:36:38.769547] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2870466 ] 00:09:13.066 [2024-11-05 10:36:38.913560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.066 [2024-11-05 10:36:38.967560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.325 INFO: Running with entropic power schedule (0xFF, 100). 00:09:13.325 INFO: Seed: 625410607 00:09:13.325 INFO: Loaded 1 modules (384677 inline 8-bit counters): 384677 [0x2bfc44c, 0x2c5a2f1), 00:09:13.325 INFO: Loaded 1 PC tables (384677 PCs): 384677 [0x2c5a2f8,0x3238d48), 00:09:13.325 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:09:13.325 INFO: A corpus is not provided, starting from an empty corpus 00:09:13.325 #2 INITED exec/s: 0 rss: 68Mb 00:09:13.325 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:13.325 This may also happen if the target rejected all inputs we tried so far 00:09:13.325 [2024-11-05 10:36:39.249703] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:09:13.325 [2024-11-05 10:36:39.292476] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:13.843 NEW_FUNC[1/673]: 0x43c578 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:09:13.843 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:13.843 #5 NEW cov: 11143 ft: 11104 corp: 2/9b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 3 ShuffleBytes-CrossOver-InsertRepeatedBytes- 00:09:13.843 [2024-11-05 10:36:39.756943] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:13.843 #11 NEW cov: 11160 ft: 14621 corp: 3/17b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:09:14.102 [2024-11-05 10:36:39.926359] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:14.102 NEW_FUNC[1/1]: 0x1bfd1a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:14.102 #12 NEW cov: 11180 ft: 15458 corp: 4/25b lim: 8 exec/s: 0 rss: 76Mb L: 8/8 MS: 1 ChangeBinInt- 00:09:14.102 [2024-11-05 10:36:40.095142] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:14.360 #18 NEW cov: 11180 ft: 15830 corp: 5/33b lim: 8 exec/s: 0 rss: 76Mb L: 8/8 MS: 1 ChangeBinInt- 00:09:14.360 [2024-11-05 10:36:40.264527] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:14.360 #34 NEW cov: 11180 ft: 15892 corp: 6/41b lim: 8 exec/s: 34 rss: 76Mb L: 8/8 MS: 1 CopyPart- 00:09:14.360 [2024-11-05 10:36:40.424029] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:14.619 #35 NEW cov: 11180 ft: 16053 corp: 7/49b lim: 8 exec/s: 35 rss: 76Mb L: 8/8 MS: 1 CopyPart- 00:09:14.619 [2024-11-05 10:36:40.585028] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:14.619 #36 NEW cov: 11180 ft: 16222 corp: 8/57b lim: 8 exec/s: 36 rss: 76Mb L: 8/8 MS: 1 ChangeBit- 00:09:14.878 [2024-11-05 10:36:40.745753] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:14.878 #39 NEW cov: 11180 ft: 16336 corp: 9/65b lim: 8 exec/s: 39 rss: 76Mb L: 8/8 MS: 3 EraseBytes-CrossOver-InsertByte- 00:09:14.878 [2024-11-05 10:36:40.905865] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:15.137 #40 NEW cov: 11187 ft: 16539 corp: 10/73b lim: 8 exec/s: 40 rss: 76Mb L: 8/8 MS: 1 ChangeBit- 00:09:15.137 [2024-11-05 10:36:41.064345] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:15.137 #46 NEW cov: 11187 ft: 17182 corp: 11/81b lim: 8 exec/s: 46 rss: 76Mb L: 8/8 MS: 1 ChangeBit- 00:09:15.396 [2024-11-05 10:36:41.221974] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:15.396 #52 NEW cov: 11187 ft: 17220 corp: 12/89b lim: 8 exec/s: 26 rss: 76Mb L: 8/8 MS: 1 CrossOver- 00:09:15.396 #52 DONE cov: 11187 ft: 17220 corp: 12/89b lim: 8 exec/s: 26 rss: 76Mb 00:09:15.396 Done 52 runs in 2 second(s) 00:09:15.396 [2024-11-05 10:36:41.335988] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:09:15.655 10:36:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:09:15.655 10:36:41 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:15.655 10:36:41 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:15.655 10:36:41 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:09:15.655 10:36:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:09:15.655 10:36:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:15.655 10:36:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:15.655 10:36:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:09:15.655 10:36:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:09:15.655 10:36:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:09:15.655 10:36:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:09:15.655 10:36:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:09:15.655 10:36:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:15.655 10:36:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:15.655 10:36:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:09:15.655 10:36:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:09:15.655 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:15.655 10:36:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:15.655 10:36:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:15.655 10:36:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:09:15.655 [2024-11-05 10:36:41.654509] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:09:15.655 [2024-11-05 10:36:41.654587] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2870821 ] 00:09:15.915 [2024-11-05 10:36:41.797134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.915 [2024-11-05 10:36:41.852908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.174 INFO: Running with entropic power schedule (0xFF, 100). 
00:09:16.174 INFO: Seed: 3512406341 00:09:16.174 INFO: Loaded 1 modules (384677 inline 8-bit counters): 384677 [0x2bfc44c, 0x2c5a2f1), 00:09:16.174 INFO: Loaded 1 PC tables (384677 PCs): 384677 [0x2c5a2f8,0x3238d48), 00:09:16.174 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:09:16.174 INFO: A corpus is not provided, starting from an empty corpus 00:09:16.174 #2 INITED exec/s: 0 rss: 66Mb 00:09:16.174 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:16.174 This may also happen if the target rejected all inputs we tried so far 00:09:16.174 [2024-11-05 10:36:42.120738] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:09:16.692 NEW_FUNC[1/673]: 0x43cc68 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:09:16.692 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:16.692 #259 NEW cov: 11157 ft: 11112 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 2 ChangeByte-InsertRepeatedBytes- 00:09:16.692 #270 NEW cov: 11171 ft: 14804 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 CrossOver- 00:09:16.951 NEW_FUNC[1/1]: 0x1bfd1a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:16.951 #286 NEW cov: 11188 ft: 16077 corp: 4/97b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt- 00:09:17.210 #292 NEW cov: 11188 ft: 16541 corp: 5/129b lim: 32 exec/s: 292 rss: 75Mb L: 32/32 MS: 1 CMP- DE: "\001\372"- 00:09:17.210 #298 NEW cov: 11188 ft: 17247 corp: 6/161b lim: 32 exec/s: 298 rss: 75Mb L: 32/32 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:09:17.469 #304 NEW cov: 11188 ft: 17424 corp: 7/193b lim: 32 exec/s: 304 rss: 75Mb L: 32/32 MS: 1 ChangeByte- 00:09:17.727 #309 NEW cov: 11188 ft: 17972 corp: 8/225b lim: 32 exec/s: 309 rss: 75Mb L: 32/32 MS: 5 EraseBytes-InsertByte-ChangeByte-InsertByte-InsertRepeatedBytes- 00:09:17.727 #315 NEW cov: 11188 ft: 18027 corp: 9/257b lim: 32 exec/s: 315 rss: 75Mb L: 32/32 MS: 1 ChangeBit- 00:09:17.987 #320 NEW cov: 11195 ft: 18329 corp: 10/289b lim: 32 exec/s: 320 rss: 75Mb L: 32/32 MS: 5 EraseBytes-ShuffleBytes-InsertRepeatedBytes-CMP-InsertByte- DE: "_\346\364l\336q:\000"- 00:09:18.246 #321 NEW cov: 11195 ft: 18484 corp: 11/321b lim: 32 exec/s: 160 rss: 76Mb L: 32/32 MS: 1 ChangeBit- 00:09:18.246 #321 DONE cov: 11195 ft: 18484 corp: 11/321b lim: 32 exec/s: 160 rss: 76Mb 00:09:18.246 ###### Recommended dictionary. ###### 00:09:18.246 "\001\372" # Uses: 3 00:09:18.246 "\000\000\000\000\000\000\000\000" # Uses: 1 00:09:18.246 "_\346\364l\336q:\000" # Uses: 0 00:09:18.246 ###### End of recommended dictionary. 
###### 00:09:18.246 Done 321 runs in 2 second(s) 00:09:18.246 [2024-11-05 10:36:44.131971] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:09:18.504 10:36:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:09:18.504 10:36:44 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:18.504 10:36:44 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:18.504 10:36:44 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:09:18.504 10:36:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:09:18.504 10:36:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:18.504 10:36:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:18.504 10:36:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:09:18.504 10:36:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:09:18.504 10:36:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:09:18.504 10:36:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:09:18.504 10:36:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:09:18.504 10:36:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:18.504 10:36:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:18.504 10:36:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:09:18.504 10:36:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:09:18.504 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:18.504 10:36:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:18.504 10:36:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:18.505 10:36:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:09:18.505 [2024-11-05 10:36:44.446786] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
00:09:18.505 [2024-11-05 10:36:44.446861] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2871249 ] 00:09:18.763 [2024-11-05 10:36:44.586960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.763 [2024-11-05 10:36:44.641167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.022 INFO: Running with entropic power schedule (0xFF, 100). 00:09:19.022 INFO: Seed: 2005435304 00:09:19.022 INFO: Loaded 1 modules (384677 inline 8-bit counters): 384677 [0x2bfc44c, 0x2c5a2f1), 00:09:19.022 INFO: Loaded 1 PC tables (384677 PCs): 384677 [0x2c5a2f8,0x3238d48), 00:09:19.022 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:09:19.022 INFO: A corpus is not provided, starting from an empty corpus 00:09:19.022 #2 INITED exec/s: 0 rss: 67Mb 00:09:19.023 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:19.023 This may also happen if the target rejected all inputs we tried so far 00:09:19.023 [2024-11-05 10:36:44.907864] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:09:19.281 NEW_FUNC[1/673]: 0x43d4e8 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:09:19.281 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:19.281 #44 NEW cov: 11158 ft: 10830 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 2 ChangeBit-InsertRepeatedBytes- 00:09:19.540 #60 NEW cov: 11172 ft: 13899 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 CopyPart- 00:09:19.799 NEW_FUNC[1/1]: 0x1bfd1a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:19.799 #66 NEW cov: 11189 ft: 15970 corp: 4/97b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ChangeBit- 00:09:19.799 #67 NEW cov: 11189 ft: 16603 corp: 5/129b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 CMP- DE: "\001\372"- 00:09:20.057 #68 NEW cov: 11189 ft: 16736 corp: 6/161b lim: 32 exec/s: 68 rss: 76Mb L: 32/32 MS: 1 ChangeBinInt- 00:09:20.316 #69 NEW cov: 11189 ft: 17486 corp: 7/193b lim: 32 exec/s: 69 rss: 76Mb L: 32/32 MS: 1 ChangeByte- 00:09:20.316 #70 NEW cov: 11189 ft: 17572 corp: 8/225b lim: 32 exec/s: 70 rss: 76Mb L: 32/32 MS: 1 ChangeByte- 00:09:20.576 #71 NEW cov: 11189 ft: 17617 corp: 9/257b lim: 32 exec/s: 71 rss: 77Mb L: 32/32 MS: 1 ChangeByte- 00:09:20.835 #77 NEW cov: 11196 ft: 17722 corp: 10/289b lim: 32 exec/s: 77 rss: 77Mb L: 32/32 MS: 1 ChangeBit- 00:09:20.835 #78 NEW cov: 11196 ft: 17763 corp: 11/321b lim: 32 exec/s: 39 rss: 77Mb L: 32/32 MS: 1 ChangeBit- 00:09:20.835 #78 DONE cov: 11196 ft: 17763 corp: 11/321b lim: 32 exec/s: 39 rss: 77Mb 00:09:20.835 ###### Recommended dictionary. ###### 00:09:20.835 "\001\372" # Uses: 0 00:09:20.835 ###### End of recommended dictionary. 
###### 00:09:20.835 Done 78 runs in 2 second(s) 00:09:21.093 [2024-11-05 10:36:46.928987] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:09:21.352 10:36:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:09:21.352 10:36:47 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:21.352 10:36:47 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:21.352 10:36:47 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:09:21.352 10:36:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:09:21.352 10:36:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:21.352 10:36:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:21.352 10:36:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:09:21.352 10:36:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:09:21.352 10:36:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:09:21.352 10:36:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:09:21.352 10:36:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:09:21.352 10:36:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:21.352 10:36:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:21.352 10:36:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:09:21.352 10:36:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:09:21.352 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:21.352 10:36:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:21.352 10:36:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:21.352 10:36:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:09:21.352 [2024-11-05 10:36:47.241170] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 
00:09:21.352 [2024-11-05 10:36:47.241244] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2871685 ] 00:09:21.352 [2024-11-05 10:36:47.384057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.611 [2024-11-05 10:36:47.439112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.611 INFO: Running with entropic power schedule (0xFF, 100). 00:09:21.611 INFO: Seed: 510477533 00:09:21.611 INFO: Loaded 1 modules (384677 inline 8-bit counters): 384677 [0x2bfc44c, 0x2c5a2f1), 00:09:21.611 INFO: Loaded 1 PC tables (384677 PCs): 384677 [0x2c5a2f8,0x3238d48), 00:09:21.611 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:09:21.611 INFO: A corpus is not provided, starting from an empty corpus 00:09:21.611 #2 INITED exec/s: 0 rss: 68Mb 00:09:21.611 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:21.611 This may also happen if the target rejected all inputs we tried so far 00:09:21.870 [2024-11-05 10:36:47.714107] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:09:21.870 [2024-11-05 10:36:47.757787] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:21.870 [2024-11-05 10:36:47.757842] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:22.129 NEW_FUNC[1/674]: 0x43dee8 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:09:22.129 NEW_FUNC[2/674]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:22.129 #41 NEW cov: 11164 ft: 11112 corp: 2/14b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 4 CrossOver-InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:09:22.388 [2024-11-05 10:36:48.221695] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:22.388 [2024-11-05 10:36:48.221772] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:22.388 #52 NEW cov: 11178 ft: 14891 corp: 3/27b lim: 13 exec/s: 0 rss: 76Mb L: 13/13 MS: 1 CopyPart- 00:09:22.388 [2024-11-05 10:36:48.409415] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:22.388 [2024-11-05 10:36:48.409460] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:22.647 NEW_FUNC[1/1]: 0x1bfd1a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:22.647 #53 NEW cov: 11195 ft: 15391 corp: 4/40b lim: 13 exec/s: 0 rss: 77Mb L: 13/13 MS: 1 CrossOver- 00:09:22.647 [2024-11-05 10:36:48.596945] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:22.647 [2024-11-05 10:36:48.596991] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:22.647 #54 NEW cov: 11195 ft: 15587 corp: 5/53b lim: 13 exec/s: 54 rss: 77Mb L: 13/13 MS: 1 CopyPart- 00:09:22.906 [2024-11-05 10:36:48.773146] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:22.906 [2024-11-05 10:36:48.773189] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 
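For reading this round's output: the *ERROR* triplets are the vfio-user target rejecting the malformed command-8 messages sent by fuzz_vfio_user_irq_set, so they are expected during the run rather than test failures, while the #N NEW/DONE lines are libFuzzer status (cov = covered code blocks, ft = features, corp = corpus entries/bytes, lim = current input size limit, MS = the mutation sequence that produced the input). A hypothetical post-processing one-liner, not part of this job, for pulling each round's closing statistics out of a saved copy of this console output (console.log is a placeholder name):

  grep -E '#[0-9]+ DONE cov:' console.log     # one closing stats line per round
  grep -c 'Done [0-9]* runs in' console.log   # number of rounds that completed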
00:09:22.906 #65 NEW cov: 11195 ft: 16066 corp: 6/66b lim: 13 exec/s: 65 rss: 77Mb L: 13/13 MS: 1 ChangeByte- 00:09:22.906 [2024-11-05 10:36:48.949584] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:22.906 [2024-11-05 10:36:48.949622] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:23.165 #66 NEW cov: 11195 ft: 16568 corp: 7/79b lim: 13 exec/s: 66 rss: 77Mb L: 13/13 MS: 1 ChangeBit- 00:09:23.165 [2024-11-05 10:36:49.125996] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:23.165 [2024-11-05 10:36:49.126034] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:23.165 #67 NEW cov: 11195 ft: 17010 corp: 8/92b lim: 13 exec/s: 67 rss: 77Mb L: 13/13 MS: 1 ChangeBit- 00:09:23.425 [2024-11-05 10:36:49.302532] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:23.425 [2024-11-05 10:36:49.302571] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:23.425 #68 NEW cov: 11195 ft: 17070 corp: 9/105b lim: 13 exec/s: 68 rss: 77Mb L: 13/13 MS: 1 ChangeBit- 00:09:23.425 [2024-11-05 10:36:49.478920] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:23.425 [2024-11-05 10:36:49.478958] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:23.689 #74 NEW cov: 11202 ft: 17176 corp: 10/118b lim: 13 exec/s: 74 rss: 77Mb L: 13/13 MS: 1 ChangeBit- 00:09:23.689 [2024-11-05 10:36:49.655791] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:23.689 [2024-11-05 10:36:49.655831] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:23.950 #80 NEW cov: 11202 ft: 17522 corp: 11/131b lim: 13 exec/s: 40 rss: 77Mb L: 13/13 MS: 1 ChangeBit- 00:09:23.950 #80 DONE cov: 11202 ft: 17522 corp: 11/131b lim: 13 exec/s: 40 rss: 77Mb 00:09:23.950 Done 80 runs in 2 second(s) 00:09:23.950 [2024-11-05 10:36:49.789964] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:09:24.209 10:36:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:09:24.209 10:36:50 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:24.209 10:36:50 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:24.209 10:36:50 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:09:24.209 10:36:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:09:24.209 10:36:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:24.209 10:36:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:24.209 10:36:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:09:24.209 10:36:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:09:24.209 10:36:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:09:24.209 10:36:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:09:24.209 10:36:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:09:24.209 10:36:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local 
suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:24.209 10:36:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:24.209 10:36:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:09:24.209 10:36:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:09:24.209 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:24.209 10:36:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:24.209 10:36:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:24.209 10:36:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:09:24.209 [2024-11-05 10:36:50.087997] Starting SPDK v25.01-pre git sha1 2f35f3599 / DPDK 24.03.0 initialization... 00:09:24.209 [2024-11-05 10:36:50.088081] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2872065 ] 00:09:24.209 [2024-11-05 10:36:50.232119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.209 [2024-11-05 10:36:50.287890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.468 INFO: Running with entropic power schedule (0xFF, 100). 00:09:24.468 INFO: Seed: 3364482443 00:09:24.468 INFO: Loaded 1 modules (384677 inline 8-bit counters): 384677 [0x2bfc44c, 0x2c5a2f1), 00:09:24.468 INFO: Loaded 1 PC tables (384677 PCs): 384677 [0x2c5a2f8,0x3238d48), 00:09:24.468 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:09:24.468 INFO: A corpus is not provided, starting from an empty corpus 00:09:24.468 #2 INITED exec/s: 0 rss: 67Mb 00:09:24.468 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:24.468 This may also happen if the target rejected all inputs we tried so far 00:09:24.727 [2024-11-05 10:36:50.571225] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:09:24.727 [2024-11-05 10:36:50.614790] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:24.727 [2024-11-05 10:36:50.614896] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:24.986 NEW_FUNC[1/674]: 0x43ebd8 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:09:24.986 NEW_FUNC[2/674]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:24.986 #13 NEW cov: 11160 ft: 11106 corp: 2/10b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:09:25.245 [2024-11-05 10:36:51.078414] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:25.245 [2024-11-05 10:36:51.078473] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:25.245 #19 NEW cov: 11174 ft: 14058 corp: 3/19b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:09:25.245 [2024-11-05 10:36:51.262736] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:25.245 [2024-11-05 10:36:51.262780] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:25.504 NEW_FUNC[1/1]: 0x1bfd1a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:25.504 #21 NEW cov: 11191 ft: 15141 corp: 4/28b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 2 ChangeByte-InsertRepeatedBytes- 00:09:25.504 [2024-11-05 10:36:51.446251] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:25.504 [2024-11-05 10:36:51.446295] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:25.504 #22 NEW cov: 11191 ft: 15460 corp: 5/37b lim: 9 exec/s: 22 rss: 75Mb L: 9/9 MS: 1 CopyPart- 00:09:25.763 [2024-11-05 10:36:51.618247] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:25.763 [2024-11-05 10:36:51.618286] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:25.763 #23 NEW cov: 11191 ft: 16628 corp: 6/46b lim: 9 exec/s: 23 rss: 75Mb L: 9/9 MS: 1 CrossOver- 00:09:25.763 [2024-11-05 10:36:51.788402] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:25.763 [2024-11-05 10:36:51.788441] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:26.022 #28 NEW cov: 11191 ft: 17034 corp: 7/55b lim: 9 exec/s: 28 rss: 76Mb L: 9/9 MS: 5 ShuffleBytes-InsertByte-InsertRepeatedBytes-EraseBytes-CrossOver- 00:09:26.022 [2024-11-05 10:36:51.961185] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:26.022 [2024-11-05 10:36:51.961223] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:26.022 #34 NEW cov: 11191 ft: 17269 corp: 8/64b lim: 9 exec/s: 34 rss: 76Mb L: 9/9 MS: 1 ShuffleBytes- 00:09:26.280 [2024-11-05 10:36:52.133202] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:26.280 [2024-11-05 10:36:52.133242] vfio_user.c: 
144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:26.280 #35 NEW cov: 11191 ft: 17738 corp: 9/73b lim: 9 exec/s: 35 rss: 76Mb L: 9/9 MS: 1 CopyPart- 00:09:26.280 [2024-11-05 10:36:52.303927] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:26.280 [2024-11-05 10:36:52.303966] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:26.539 #36 NEW cov: 11198 ft: 18057 corp: 10/82b lim: 9 exec/s: 36 rss: 76Mb L: 9/9 MS: 1 CopyPart- 00:09:26.539 [2024-11-05 10:36:52.478863] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:26.539 [2024-11-05 10:36:52.478904] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:26.539 #37 NEW cov: 11198 ft: 18308 corp: 11/91b lim: 9 exec/s: 18 rss: 76Mb L: 9/9 MS: 1 ShuffleBytes- 00:09:26.539 #37 DONE cov: 11198 ft: 18308 corp: 11/91b lim: 9 exec/s: 18 rss: 76Mb 00:09:26.539 ###### Recommended dictionary. ###### 00:09:26.539 "\001\000\000\000\000\000\000\000" # Uses: 1 00:09:26.539 ###### End of recommended dictionary. ###### 00:09:26.539 Done 37 runs in 2 second(s) 00:09:26.539 [2024-11-05 10:36:52.608987] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:09:26.891 10:36:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:09:26.891 10:36:52 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:26.891 10:36:52 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:26.891 10:36:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:09:26.891 00:09:26.891 real 0m20.312s 00:09:26.891 user 0m27.518s 00:09:26.891 sys 0m2.335s 00:09:26.891 10:36:52 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:26.891 10:36:52 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:09:26.891 ************************************ 00:09:26.891 END TEST vfio_llvm_fuzz 00:09:26.891 ************************************ 00:09:26.891 00:09:26.891 real 1m26.400s 00:09:26.891 user 2m7.454s 00:09:26.891 sys 0m11.018s 00:09:26.891 10:36:52 llvm_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:26.891 10:36:52 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:09:26.891 ************************************ 00:09:26.891 END TEST llvm_fuzz 00:09:26.891 ************************************ 00:09:26.891 10:36:52 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:09:26.891 10:36:52 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:09:26.891 10:36:52 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:09:26.891 10:36:52 -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:26.891 10:36:52 -- common/autotest_common.sh@10 -- # set +x 00:09:27.198 10:36:52 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:09:27.198 10:36:52 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:09:27.198 10:36:52 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:09:27.198 10:36:52 -- common/autotest_common.sh@10 -- # set +x 00:09:32.461 INFO: APP EXITING 00:09:32.461 INFO: killing all VMs 00:09:32.461 INFO: killing vhost app 00:09:32.461 WARN: no vhost pid file found 00:09:32.461 INFO: EXIT DONE 00:09:35.745 Waiting for block devices as requested 00:09:35.745 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme 00:09:35.745 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:35.745 0000:00:04.6 (8086 2021): 
vfio-pci -> ioatdma 00:09:35.745 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:09:35.745 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:36.004 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:36.004 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:36.004 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:36.264 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:09:36.264 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:36.264 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:09:36.523 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:09:36.523 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:36.523 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:36.782 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:36.782 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:36.782 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:09:43.379 Cleaning 00:09:43.379 Removing: /dev/shm/spdk_tgt_trace.pid2848737 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2846268 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2847388 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2848737 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2849184 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2850004 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2850030 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2850940 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2850946 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2851297 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2851636 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2851932 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2852173 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2852429 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2852626 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2852824 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2853055 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2853642 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2856494 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2856707 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2856910 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2856981 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2857485 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2857504 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2857970 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2858057 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2858270 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2858446 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2858648 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2858800 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2859113 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2859311 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2859509 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2859752 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2860323 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2860685 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2861052 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2861413 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2861767 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2862126 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2862490 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2862849 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2863203 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2863562 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2863920 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2864271 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2864578 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2864965 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2865334 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2866084 00:09:43.379 Removing: 
/var/run/dpdk/spdk_pid2866432 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2866788 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2867146 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2867501 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2867868 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2868217 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2868580 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2868931 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2869292 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2869741 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2870104 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2870466 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2870821 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2871249 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2871685 00:09:43.379 Removing: /var/run/dpdk/spdk_pid2872065 00:09:43.379 Clean 00:09:43.379 10:37:09 -- common/autotest_common.sh@1451 -- # return 0 00:09:43.379 10:37:09 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:09:43.379 10:37:09 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:43.379 10:37:09 -- common/autotest_common.sh@10 -- # set +x 00:09:43.379 10:37:09 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:09:43.379 10:37:09 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:43.379 10:37:09 -- common/autotest_common.sh@10 -- # set +x 00:09:43.379 10:37:09 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:09:43.379 10:37:09 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:09:43.379 10:37:09 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:09:43.379 10:37:09 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:09:43.379 10:37:09 -- spdk/autotest.sh@394 -- # hostname 00:09:43.379 10:37:09 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-39 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info 00:09:43.379 geninfo: WARNING: invalid characters removed from testname! 
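The coverage collection running at this point is the standard lcov capture / combine / filter sequence: the job captures a per-test tracefile through its llvm-gcov.sh wrapper, merges it with the pre-test baseline tracefile using repeated -a options, and then strips dpdk, /usr, example, and standalone-app paths with -r before report generation. A minimal standalone sketch of that sequence, assuming placeholder paths and the default gcov tool rather than this job's wrapper:

    #!/usr/bin/env bash
    # Sketch only: BUILD_DIR/OUT_DIR are illustrative, not this job's paths.
    set -euo pipefail
    BUILD_DIR=./build      # tree containing the .gcno/.gcda files
    OUT_DIR=./coverage
    mkdir -p "$OUT_DIR"
    # 1) Capture the test-run coverage into its own tracefile.
    lcov -q -c --no-external -d "$BUILD_DIR" -o "$OUT_DIR/cov_test.info"
    # 2) Merge the pre-test baseline with the test tracefile.
    lcov -q -a "$OUT_DIR/cov_base.info" -a "$OUT_DIR/cov_test.info" \
         -o "$OUT_DIR/cov_total.info"
    # 3) Drop paths that should not count toward coverage, as the job does
    #    for '*/dpdk/*', '/usr/*', examples, and standalone apps.
    lcov -q -r "$OUT_DIR/cov_total.info" '*/dpdk/*' '/usr/*' \
         -o "$OUT_DIR/cov_total.info"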
00:09:47.568 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcda 00:09:52.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcda 00:09:57.031 10:37:22 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:10:05.149 10:37:30 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:10:10.418 10:37:35 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:10:15.688 10:37:41 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:10:20.957 10:37:46 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:10:26.226 10:37:51 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:10:31.498 10:37:57 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:10:31.498 10:37:57 -- spdk/autorun.sh@1 -- $ timing_finish 00:10:31.498 10:37:57 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt ]] 00:10:31.498 10:37:57 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:10:31.498 10:37:57 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:10:31.498 10:37:57 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:10:31.498 + [[ -n 2733837 ]] 00:10:31.498 + sudo kill 2733837 00:10:31.507 [Pipeline] } 00:10:31.522 [Pipeline] // stage 00:10:31.527 [Pipeline] } 00:10:31.541 [Pipeline] // timeout 00:10:31.546 [Pipeline] } 00:10:31.560 [Pipeline] // catchError 00:10:31.565 [Pipeline] } 00:10:31.579 [Pipeline] // wrap 00:10:31.585 [Pipeline] } 00:10:31.597 [Pipeline] // catchError 00:10:31.606 [Pipeline] stage 00:10:31.609 [Pipeline] { (Epilogue) 00:10:31.621 [Pipeline] catchError 00:10:31.623 [Pipeline] { 00:10:31.635 [Pipeline] echo 00:10:31.637 Cleanup processes 00:10:31.643 [Pipeline] sh 00:10:31.926 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:10:31.926 2879379 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:10:31.940 [Pipeline] sh 00:10:32.223 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:10:32.223 ++ grep -v 'sudo pgrep' 00:10:32.223 ++ awk '{print $1}' 00:10:32.223 + sudo kill -9 00:10:32.223 + true 00:10:32.234 [Pipeline] sh 00:10:32.517 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:10:50.697 [Pipeline] sh 00:10:50.980 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:10:50.980 Artifacts sizes are good 00:10:50.995 [Pipeline] archiveArtifacts 00:10:51.003 Archiving artifacts 00:10:51.156 [Pipeline] sh 00:10:51.443 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest 00:10:51.457 [Pipeline] cleanWs 00:10:51.467 [WS-CLEANUP] Deleting project workspace... 00:10:51.467 [WS-CLEANUP] Deferred wipeout is used... 00:10:51.473 [WS-CLEANUP] done 00:10:51.475 [Pipeline] } 00:10:51.492 [Pipeline] // catchError 00:10:51.505 [Pipeline] sh 00:10:51.787 + logger -p user.info -t JENKINS-CI 00:10:51.795 [Pipeline] } 00:10:51.806 [Pipeline] // stage 00:10:51.811 [Pipeline] } 00:10:51.824 [Pipeline] // node 00:10:51.828 [Pipeline] End of Pipeline 00:10:51.871 Finished: SUCCESS
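The stray-process sweep that appears in both the Prologue and the Epilogue above reduces to a single pipeline: list anything still running out of the workspace, drop the pgrep invocation itself, keep only the PID column, and kill what remains without letting an empty match fail the stage. A compact sketch, assuming a placeholder workspace path (the trailing '|| true' plays the same role as the '+ true' seen in the trace):

    #!/usr/bin/env bash
    # Sketch only: WORKSPACE is a placeholder, not this job's actual path.
    WORKSPACE=/var/jenkins/workspace/example-autotest
    # List matching processes with full command lines, filter out the pgrep
    # line itself, keep the PID column, and kill whatever remains; xargs -r
    # skips the kill entirely when the list is empty.
    sudo pgrep -af "$WORKSPACE/spdk" \
        | grep -v 'sudo pgrep' \
        | awk '{print $1}' \
        | xargs -r sudo kill -9 \
        || true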