00:00:00.001 Started by upstream project "autotest-per-patch" build number 132303 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.032 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.033 The recommended git tool is: git 00:00:00.033 using credential 00000000-0000-0000-0000-000000000002 00:00:00.036 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.051 Fetching changes from the remote Git repository 00:00:00.054 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.072 Using shallow fetch with depth 1 00:00:00.072 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.072 > git --version # timeout=10 00:00:00.101 > git --version # 'git version 2.39.2' 00:00:00.101 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.152 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.152 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.055 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.071 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.086 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:03.086 > git config core.sparsecheckout # timeout=10 00:00:03.099 > git read-tree -mu HEAD # timeout=10 00:00:03.116 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:03.135 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:03.135 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:03.241 [Pipeline] Start of Pipeline 00:00:03.255 [Pipeline] library 00:00:03.257 Loading library shm_lib@master 00:00:03.257 Library shm_lib@master is cached. Copying from home. 00:00:03.273 [Pipeline] node 00:00:03.286 Running on WFP39 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:03.288 [Pipeline] { 00:00:03.299 [Pipeline] catchError 00:00:03.301 [Pipeline] { 00:00:03.315 [Pipeline] wrap 00:00:03.323 [Pipeline] { 00:00:03.333 [Pipeline] stage 00:00:03.335 [Pipeline] { (Prologue) 00:00:03.593 [Pipeline] sh 00:00:03.879 + logger -p user.info -t JENKINS-CI 00:00:03.897 [Pipeline] echo 00:00:03.899 Node: WFP39 00:00:03.905 [Pipeline] sh 00:00:04.205 [Pipeline] setCustomBuildProperty 00:00:04.214 [Pipeline] echo 00:00:04.215 Cleanup processes 00:00:04.218 [Pipeline] sh 00:00:04.500 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.500 544528 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.544 [Pipeline] sh 00:00:04.823 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.823 ++ grep -v 'sudo pgrep' 00:00:04.823 ++ awk '{print $1}' 00:00:04.823 + sudo kill -9 00:00:04.823 + true 00:00:04.834 [Pipeline] cleanWs 00:00:04.841 [WS-CLEANUP] Deleting project workspace... 00:00:04.842 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.848 [WS-CLEANUP] done 00:00:04.852 [Pipeline] setCustomBuildProperty 00:00:04.864 [Pipeline] sh 00:00:05.145 + sudo git config --global --replace-all safe.directory '*' 00:00:05.236 [Pipeline] httpRequest 00:00:07.618 [Pipeline] echo 00:00:07.619 Sorcerer 10.211.164.101 is alive 00:00:07.628 [Pipeline] retry 00:00:07.630 [Pipeline] { 00:00:07.643 [Pipeline] httpRequest 00:00:07.647 HttpMethod: GET 00:00:07.647 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:07.648 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:07.661 Response Code: HTTP/1.1 200 OK 00:00:07.661 Success: Status code 200 is in the accepted range: 200,404 00:00:07.662 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:09.196 [Pipeline] } 00:00:09.213 [Pipeline] // retry 00:00:09.221 [Pipeline] sh 00:00:09.507 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:09.522 [Pipeline] httpRequest 00:00:11.985 [Pipeline] echo 00:00:11.987 Sorcerer 10.211.164.101 is alive 00:00:11.997 [Pipeline] retry 00:00:11.999 [Pipeline] { 00:00:12.013 [Pipeline] httpRequest 00:00:12.017 HttpMethod: GET 00:00:12.018 URL: http://10.211.164.101/packages/spdk_c46ddd981d9f69655d9cfd0fa085e903e0764826.tar.gz 00:00:12.018 Sending request to url: http://10.211.164.101/packages/spdk_c46ddd981d9f69655d9cfd0fa085e903e0764826.tar.gz 00:00:12.024 Response Code: HTTP/1.1 200 OK 00:00:12.024 Success: Status code 200 is in the accepted range: 200,404 00:00:12.025 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_c46ddd981d9f69655d9cfd0fa085e903e0764826.tar.gz 00:00:35.113 [Pipeline] } 00:00:35.130 [Pipeline] // retry 00:00:35.140 [Pipeline] sh 00:00:35.422 + tar --no-same-owner -xf spdk_c46ddd981d9f69655d9cfd0fa085e903e0764826.tar.gz 00:00:37.968 [Pipeline] sh 00:00:38.253 + git -C spdk log --oneline -n5 00:00:38.253 c46ddd981 bdev: fix per_channel data null when bdev_get_iostat with reset option 00:00:38.253 4bcab9fb9 correct kick for CQ full case 00:00:38.253 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:00:38.253 318515b44 nvme/perf: interrupt mode support for pcie controller 00:00:38.253 7bc1134d6 test/scheduler: Read PID's status file only once 00:00:38.263 [Pipeline] } 00:00:38.274 [Pipeline] // stage 00:00:38.283 [Pipeline] stage 00:00:38.284 [Pipeline] { (Prepare) 00:00:38.299 [Pipeline] writeFile 00:00:38.315 [Pipeline] sh 00:00:38.598 + logger -p user.info -t JENKINS-CI 00:00:38.610 [Pipeline] sh 00:00:38.895 + logger -p user.info -t JENKINS-CI 00:00:38.908 [Pipeline] sh 00:00:39.188 + cat autorun-spdk.conf 00:00:39.188 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:39.188 SPDK_TEST_FUZZER_SHORT=1 00:00:39.188 SPDK_TEST_FUZZER=1 00:00:39.188 SPDK_TEST_SETUP=1 00:00:39.188 SPDK_RUN_UBSAN=1 00:00:39.195 RUN_NIGHTLY=0 00:00:39.200 [Pipeline] readFile 00:00:39.231 [Pipeline] withEnv 00:00:39.233 [Pipeline] { 00:00:39.251 [Pipeline] sh 00:00:39.538 + set -ex 00:00:39.538 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:00:39.538 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:39.538 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:39.538 ++ SPDK_TEST_FUZZER_SHORT=1 00:00:39.538 ++ SPDK_TEST_FUZZER=1 00:00:39.538 ++ SPDK_TEST_SETUP=1 00:00:39.538 ++ SPDK_RUN_UBSAN=1 00:00:39.538 ++ RUN_NIGHTLY=0 00:00:39.538 + case $SPDK_TEST_NVMF_NICS in 00:00:39.538 + 
DRIVERS= 00:00:39.538 + [[ -n '' ]] 00:00:39.538 + exit 0 00:00:39.547 [Pipeline] } 00:00:39.564 [Pipeline] // withEnv 00:00:39.571 [Pipeline] } 00:00:39.588 [Pipeline] // stage 00:00:39.601 [Pipeline] catchError 00:00:39.603 [Pipeline] { 00:00:39.618 [Pipeline] timeout 00:00:39.618 Timeout set to expire in 30 min 00:00:39.620 [Pipeline] { 00:00:39.634 [Pipeline] stage 00:00:39.636 [Pipeline] { (Tests) 00:00:39.652 [Pipeline] sh 00:00:39.937 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:39.937 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:39.937 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:00:39.937 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:00:39.937 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:39.937 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:00:39.937 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:00:39.937 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:00:39.937 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:00:39.937 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:00:39.937 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:00:39.937 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:39.937 + source /etc/os-release 00:00:39.937 ++ NAME='Fedora Linux' 00:00:39.937 ++ VERSION='39 (Cloud Edition)' 00:00:39.937 ++ ID=fedora 00:00:39.937 ++ VERSION_ID=39 00:00:39.937 ++ VERSION_CODENAME= 00:00:39.937 ++ PLATFORM_ID=platform:f39 00:00:39.937 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:00:39.937 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:39.937 ++ LOGO=fedora-logo-icon 00:00:39.937 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:00:39.937 ++ HOME_URL=https://fedoraproject.org/ 00:00:39.937 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:00:39.937 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:39.938 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:39.938 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:39.938 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:00:39.938 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:39.938 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:00:39.938 ++ SUPPORT_END=2024-11-12 00:00:39.938 ++ VARIANT='Cloud Edition' 00:00:39.938 ++ VARIANT_ID=cloud 00:00:39.938 + uname -a 00:00:39.938 Linux spdk-wfp-39 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux 00:00:39.938 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:00:44.133 Hugepages 00:00:44.133 node hugesize free / total 00:00:44.133 node0 1048576kB 0 / 0 00:00:44.133 node0 2048kB 0 / 0 00:00:44.133 node1 1048576kB 0 / 0 00:00:44.133 node1 2048kB 0 / 0 00:00:44.133 00:00:44.133 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:44.133 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:44.133 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:44.133 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:44.133 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:44.133 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:44.133 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:44.133 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:44.133 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:44.133 NVMe 0000:1a:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:00:44.133 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:44.133 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:44.133 I/OAT 
0000:80:04.2 8086 2021 1 ioatdma - - 00:00:44.133 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:44.133 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:44.133 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:44.133 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:44.133 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:44.133 + rm -f /tmp/spdk-ld-path 00:00:44.133 + source autorun-spdk.conf 00:00:44.133 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:44.133 ++ SPDK_TEST_FUZZER_SHORT=1 00:00:44.133 ++ SPDK_TEST_FUZZER=1 00:00:44.133 ++ SPDK_TEST_SETUP=1 00:00:44.133 ++ SPDK_RUN_UBSAN=1 00:00:44.133 ++ RUN_NIGHTLY=0 00:00:44.133 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:44.133 + [[ -n '' ]] 00:00:44.133 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:44.133 + for M in /var/spdk/build-*-manifest.txt 00:00:44.133 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:00:44.133 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:44.133 + for M in /var/spdk/build-*-manifest.txt 00:00:44.133 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:44.133 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:44.133 + for M in /var/spdk/build-*-manifest.txt 00:00:44.133 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:44.133 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:44.133 ++ uname 00:00:44.133 + [[ Linux == \L\i\n\u\x ]] 00:00:44.133 + sudo dmesg -T 00:00:44.133 + sudo dmesg --clear 00:00:44.133 + dmesg_pid=545568 00:00:44.133 + [[ Fedora Linux == FreeBSD ]] 00:00:44.133 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:44.133 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:44.133 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:44.133 + [[ -x /usr/src/fio-static/fio ]] 00:00:44.133 + export FIO_BIN=/usr/src/fio-static/fio 00:00:44.133 + FIO_BIN=/usr/src/fio-static/fio 00:00:44.133 + sudo dmesg -Tw 00:00:44.133 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:44.133 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:44.133 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:44.133 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:44.133 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:44.133 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:44.133 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:44.133 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:44.133 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:44.133 12:22:24 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:00:44.133 12:22:24 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:44.133 12:22:24 -- short-fuzz-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:44.133 12:22:24 -- short-fuzz-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_FUZZER_SHORT=1 00:00:44.133 12:22:24 -- short-fuzz-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_FUZZER=1 00:00:44.133 12:22:24 -- short-fuzz-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_SETUP=1 00:00:44.133 12:22:24 -- short-fuzz-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1 00:00:44.133 12:22:24 -- short-fuzz-phy-autotest/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:00:44.133 12:22:24 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:00:44.133 12:22:24 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:44.133 12:22:24 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:00:44.133 12:22:24 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:00:44.133 12:22:24 -- scripts/common.sh@15 -- $ shopt -s extglob 00:00:44.133 12:22:24 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:44.133 12:22:24 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:44.133 12:22:24 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:44.133 12:22:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:44.133 12:22:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:44.133 12:22:24 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:44.133 12:22:24 -- paths/export.sh@5 -- $ export PATH 00:00:44.133 12:22:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:44.133 12:22:24 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:00:44.133 12:22:24 -- common/autobuild_common.sh@486 -- $ date +%s 00:00:44.133 12:22:24 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731669744.XXXXXX 00:00:44.133 12:22:24 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731669744.lv1N5R 00:00:44.133 12:22:24 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:00:44.133 12:22:24 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:00:44.133 12:22:24 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:00:44.133 12:22:24 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:44.133 12:22:24 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:44.133 12:22:24 -- common/autobuild_common.sh@502 -- $ get_config_params 00:00:44.133 12:22:24 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:00:44.133 12:22:24 -- common/autotest_common.sh@10 -- $ set +x 00:00:44.134 12:22:24 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:44.134 12:22:24 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:00:44.134 12:22:24 -- pm/common@17 -- $ local monitor 00:00:44.134 12:22:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:44.134 12:22:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:44.134 12:22:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:44.134 12:22:24 -- pm/common@21 -- $ date +%s 00:00:44.134 12:22:24 -- pm/common@21 -- $ date +%s 00:00:44.134 12:22:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:44.134 12:22:24 -- pm/common@25 -- $ sleep 1 00:00:44.134 12:22:24 -- pm/common@21 -- $ date +%s 00:00:44.134 12:22:24 -- pm/common@21 -- $ date +%s 00:00:44.134 12:22:24 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731669744 00:00:44.134 12:22:24 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731669744 00:00:44.134 12:22:24 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731669744 00:00:44.134 12:22:24 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731669744 00:00:44.134 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731669744_collect-cpu-temp.pm.log 00:00:44.134 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731669744_collect-cpu-load.pm.log 00:00:44.134 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731669744_collect-vmstat.pm.log 00:00:44.134 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731669744_collect-bmc-pm.bmc.pm.log 00:00:45.069 12:22:25 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:00:45.069 12:22:25 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:45.069 12:22:25 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:45.069 12:22:25 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:45.069 12:22:25 -- spdk/autobuild.sh@16 -- $ date -u 00:00:45.069 Fri Nov 15 11:22:25 AM UTC 2024 00:00:45.069 12:22:25 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:45.069 v25.01-pre-188-gc46ddd981 00:00:45.069 12:22:25 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:45.069 12:22:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:45.069 12:22:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:45.069 12:22:25 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:00:45.069 12:22:25 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:00:45.069 12:22:25 -- common/autotest_common.sh@10 -- $ set +x 00:00:45.069 ************************************ 00:00:45.069 START TEST ubsan 00:00:45.069 ************************************ 00:00:45.069 12:22:25 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:00:45.069 using ubsan 00:00:45.069 00:00:45.069 real 0m0.001s 00:00:45.069 user 0m0.001s 00:00:45.069 sys 0m0.000s 00:00:45.069 12:22:25 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:00:45.069 12:22:25 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:45.069 ************************************ 00:00:45.069 END TEST ubsan 00:00:45.069 ************************************ 00:00:45.069 12:22:25 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:45.069 12:22:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:45.069 12:22:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:45.069 12:22:25 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:00:45.069 12:22:25 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:00:45.069 12:22:25 -- common/autobuild_common.sh@438 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:00:45.069 12:22:25 -- common/autotest_common.sh@1105 -- $ '[' 
2 -le 1 ']' 00:00:45.069 12:22:25 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:00:45.069 12:22:25 -- common/autotest_common.sh@10 -- $ set +x 00:00:45.069 ************************************ 00:00:45.069 START TEST autobuild_llvm_precompile 00:00:45.069 ************************************ 00:00:45.069 12:22:25 autobuild_llvm_precompile -- common/autotest_common.sh@1129 -- $ _llvm_precompile 00:00:45.069 12:22:25 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:00:45.069 12:22:25 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39) 00:00:45.069 Target: x86_64-redhat-linux-gnu 00:00:45.069 Thread model: posix 00:00:45.069 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:00:45.069 12:22:25 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=17 00:00:45.069 12:22:25 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-17 00:00:45.069 12:22:25 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-17 00:00:45.069 12:22:25 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17 00:00:45.069 12:22:25 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-17 00:00:45.069 12:22:25 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:00:45.069 12:22:25 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:00:45.069 12:22:25 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]] 00:00:45.069 12:22:25 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a' 00:00:45.069 12:22:25 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:00:45.326 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:00:45.326 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:00:45.890 Using 'verbs' RDMA provider 00:01:01.704 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:13.902 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:14.726 Creating mk/config.mk...done. 00:01:14.726 Creating mk/cc.flags.mk...done. 00:01:14.726 Type 'make' to build. 
00:01:14.726 00:01:14.726 real 0m29.537s 00:01:14.726 user 0m13.151s 00:01:14.726 sys 0m15.655s 00:01:14.726 12:22:54 autobuild_llvm_precompile -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:14.726 12:22:54 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:01:14.726 ************************************ 00:01:14.726 END TEST autobuild_llvm_precompile 00:01:14.726 ************************************ 00:01:14.726 12:22:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:14.726 12:22:54 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:14.726 12:22:54 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:14.726 12:22:54 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:14.726 12:22:54 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:14.984 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:14.984 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:15.241 Using 'verbs' RDMA provider 00:01:28.880 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:38.842 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:39.664 Creating mk/config.mk...done. 00:01:39.664 Creating mk/cc.flags.mk...done. 00:01:39.664 Type 'make' to build. 00:01:39.664 12:23:19 -- spdk/autobuild.sh@70 -- $ run_test make make -j72 00:01:39.664 12:23:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:39.664 12:23:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:39.664 12:23:19 -- common/autotest_common.sh@10 -- $ set +x 00:01:39.664 ************************************ 00:01:39.664 START TEST make 00:01:39.664 ************************************ 00:01:39.664 12:23:19 make -- common/autotest_common.sh@1129 -- $ make -j72 00:01:39.922 make[1]: Nothing to be done for 'all'. 
00:01:41.827 The Meson build system 00:01:41.828 Version: 1.5.0 00:01:41.828 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:01:41.828 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:41.828 Build type: native build 00:01:41.828 Project name: libvfio-user 00:01:41.828 Project version: 0.0.1 00:01:41.828 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:01:41.828 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:01:41.828 Host machine cpu family: x86_64 00:01:41.828 Host machine cpu: x86_64 00:01:41.828 Run-time dependency threads found: YES 00:01:41.828 Library dl found: YES 00:01:41.828 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:41.828 Run-time dependency json-c found: YES 0.17 00:01:41.828 Run-time dependency cmocka found: YES 1.1.7 00:01:41.828 Program pytest-3 found: NO 00:01:41.828 Program flake8 found: NO 00:01:41.828 Program misspell-fixer found: NO 00:01:41.828 Program restructuredtext-lint found: NO 00:01:41.828 Program valgrind found: YES (/usr/bin/valgrind) 00:01:41.828 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:41.828 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:41.828 Compiler for C supports arguments -Wwrite-strings: YES 00:01:41.828 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:41.828 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:41.828 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:41.828 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:41.828 Build targets in project: 8 00:01:41.828 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:41.828 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:41.828 00:01:41.828 libvfio-user 0.0.1 00:01:41.828 00:01:41.828 User defined options 00:01:41.828 buildtype : debug 00:01:41.828 default_library: static 00:01:41.828 libdir : /usr/local/lib 00:01:41.828 00:01:41.828 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:41.828 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:42.085 [1/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:01:42.085 [2/36] Compiling C object samples/lspci.p/lspci.c.o 00:01:42.085 [3/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:42.085 [4/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:01:42.085 [5/36] Compiling C object samples/null.p/null.c.o 00:01:42.085 [6/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:42.085 [7/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:42.085 [8/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:42.085 [9/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:01:42.085 [10/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:42.085 [11/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:42.085 [12/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:01:42.085 [13/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:42.085 [14/36] Compiling C object test/unit_tests.p/mocks.c.o 00:01:42.085 [15/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:42.085 [16/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:42.085 [17/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:42.085 [18/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:01:42.085 [19/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:01:42.085 [20/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:01:42.085 [21/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:42.085 [22/36] Compiling C object samples/server.p/server.c.o 00:01:42.085 [23/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:42.085 [24/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:42.085 [25/36] Compiling C object samples/client.p/client.c.o 00:01:42.085 [26/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:42.085 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:01:42.085 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:42.085 [29/36] Linking static target lib/libvfio-user.a 00:01:42.085 [30/36] Linking target samples/client 00:01:42.085 [31/36] Linking target test/unit_tests 00:01:42.085 [32/36] Linking target samples/lspci 00:01:42.085 [33/36] Linking target samples/null 00:01:42.085 [34/36] Linking target samples/server 00:01:42.085 [35/36] Linking target samples/shadow_ioeventfd_server 00:01:42.085 [36/36] Linking target samples/gpio-pci-idio-16 00:01:42.085 INFO: autodetecting backend as ninja 00:01:42.085 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:42.085 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:42.659 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:42.659 ninja: no work to do. 00:01:47.916 The Meson build system 00:01:47.916 Version: 1.5.0 00:01:47.916 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:01:47.916 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:01:47.916 Build type: native build 00:01:47.916 Program cat found: YES (/usr/bin/cat) 00:01:47.916 Project name: DPDK 00:01:47.916 Project version: 24.03.0 00:01:47.916 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:01:47.916 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:01:47.916 Host machine cpu family: x86_64 00:01:47.916 Host machine cpu: x86_64 00:01:47.916 Message: ## Building in Developer Mode ## 00:01:47.916 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:47.916 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:47.916 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:47.916 Program python3 found: YES (/usr/bin/python3) 00:01:47.916 Program cat found: YES (/usr/bin/cat) 00:01:47.916 Compiler for C supports arguments -march=native: YES 00:01:47.916 Checking for size of "void *" : 8 00:01:47.916 Checking for size of "void *" : 8 (cached) 00:01:47.916 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:47.916 Library m found: YES 00:01:47.916 Library numa found: YES 00:01:47.916 Has header "numaif.h" : YES 00:01:47.916 Library fdt found: NO 00:01:47.916 Library execinfo found: NO 00:01:47.916 Has header "execinfo.h" : YES 00:01:47.916 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:47.916 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:47.916 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:47.916 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:47.916 Run-time dependency openssl found: YES 3.1.1 00:01:47.916 Run-time dependency libpcap found: YES 1.10.4 00:01:47.916 Has header "pcap.h" with dependency libpcap: YES 00:01:47.916 Compiler for C supports arguments -Wcast-qual: YES 00:01:47.916 Compiler for C supports arguments -Wdeprecated: YES 00:01:47.916 Compiler for C supports arguments -Wformat: YES 00:01:47.916 Compiler for C supports arguments -Wformat-nonliteral: YES 00:01:47.916 Compiler for C supports arguments -Wformat-security: YES 00:01:47.916 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:47.916 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:47.916 Compiler for C supports arguments -Wnested-externs: YES 00:01:47.916 Compiler for C supports arguments -Wold-style-definition: YES 00:01:47.916 Compiler for C supports arguments -Wpointer-arith: YES 00:01:47.916 Compiler for C supports arguments -Wsign-compare: YES 00:01:47.916 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:47.916 Compiler for C supports arguments -Wundef: YES 00:01:47.916 Compiler for C supports arguments -Wwrite-strings: YES 00:01:47.916 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:47.916 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:01:47.916 Compiler for C supports arguments -Wno-missing-field-initializers: 
YES 00:01:47.916 Program objdump found: YES (/usr/bin/objdump) 00:01:47.916 Compiler for C supports arguments -mavx512f: YES 00:01:47.916 Checking if "AVX512 checking" compiles: YES 00:01:47.916 Fetching value of define "__SSE4_2__" : 1 00:01:47.916 Fetching value of define "__AES__" : 1 00:01:47.916 Fetching value of define "__AVX__" : 1 00:01:47.916 Fetching value of define "__AVX2__" : 1 00:01:47.916 Fetching value of define "__AVX512BW__" : 1 00:01:47.916 Fetching value of define "__AVX512CD__" : 1 00:01:47.916 Fetching value of define "__AVX512DQ__" : 1 00:01:47.916 Fetching value of define "__AVX512F__" : 1 00:01:47.916 Fetching value of define "__AVX512VL__" : 1 00:01:47.916 Fetching value of define "__PCLMUL__" : 1 00:01:47.916 Fetching value of define "__RDRND__" : 1 00:01:47.916 Fetching value of define "__RDSEED__" : 1 00:01:47.916 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:47.916 Fetching value of define "__znver1__" : (undefined) 00:01:47.916 Fetching value of define "__znver2__" : (undefined) 00:01:47.916 Fetching value of define "__znver3__" : (undefined) 00:01:47.916 Fetching value of define "__znver4__" : (undefined) 00:01:47.916 Compiler for C supports arguments -Wno-format-truncation: NO 00:01:47.916 Message: lib/log: Defining dependency "log" 00:01:47.916 Message: lib/kvargs: Defining dependency "kvargs" 00:01:47.916 Message: lib/telemetry: Defining dependency "telemetry" 00:01:47.916 Checking for function "getentropy" : NO 00:01:47.916 Message: lib/eal: Defining dependency "eal" 00:01:47.916 Message: lib/ring: Defining dependency "ring" 00:01:47.916 Message: lib/rcu: Defining dependency "rcu" 00:01:47.916 Message: lib/mempool: Defining dependency "mempool" 00:01:47.916 Message: lib/mbuf: Defining dependency "mbuf" 00:01:47.916 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:47.916 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:47.916 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:47.916 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:47.916 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:47.916 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:47.916 Compiler for C supports arguments -mpclmul: YES 00:01:47.916 Compiler for C supports arguments -maes: YES 00:01:47.916 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:47.916 Compiler for C supports arguments -mavx512bw: YES 00:01:47.916 Compiler for C supports arguments -mavx512dq: YES 00:01:47.916 Compiler for C supports arguments -mavx512vl: YES 00:01:47.917 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:47.917 Compiler for C supports arguments -mavx2: YES 00:01:47.917 Compiler for C supports arguments -mavx: YES 00:01:47.917 Message: lib/net: Defining dependency "net" 00:01:47.917 Message: lib/meter: Defining dependency "meter" 00:01:47.917 Message: lib/ethdev: Defining dependency "ethdev" 00:01:47.917 Message: lib/pci: Defining dependency "pci" 00:01:47.917 Message: lib/cmdline: Defining dependency "cmdline" 00:01:47.917 Message: lib/hash: Defining dependency "hash" 00:01:47.917 Message: lib/timer: Defining dependency "timer" 00:01:47.917 Message: lib/compressdev: Defining dependency "compressdev" 00:01:47.917 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:47.917 Message: lib/dmadev: Defining dependency "dmadev" 00:01:47.917 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:47.917 Message: lib/power: Defining dependency "power" 00:01:47.917 Message: lib/reorder: Defining 
dependency "reorder" 00:01:47.917 Message: lib/security: Defining dependency "security" 00:01:47.917 Has header "linux/userfaultfd.h" : YES 00:01:47.917 Has header "linux/vduse.h" : YES 00:01:47.917 Message: lib/vhost: Defining dependency "vhost" 00:01:47.917 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:01:47.917 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:47.917 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:47.917 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:47.917 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:47.917 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:47.917 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:47.917 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:47.917 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:47.917 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:47.917 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:47.917 Configuring doxy-api-html.conf using configuration 00:01:47.917 Configuring doxy-api-man.conf using configuration 00:01:47.917 Program mandb found: YES (/usr/bin/mandb) 00:01:47.917 Program sphinx-build found: NO 00:01:47.917 Configuring rte_build_config.h using configuration 00:01:47.917 Message: 00:01:47.917 ================= 00:01:47.917 Applications Enabled 00:01:47.917 ================= 00:01:47.917 00:01:47.917 apps: 00:01:47.917 00:01:47.917 00:01:47.917 Message: 00:01:47.917 ================= 00:01:47.917 Libraries Enabled 00:01:47.917 ================= 00:01:47.917 00:01:47.917 libs: 00:01:47.917 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:47.917 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:47.917 cryptodev, dmadev, power, reorder, security, vhost, 00:01:47.917 00:01:47.917 Message: 00:01:47.917 =============== 00:01:47.917 Drivers Enabled 00:01:47.917 =============== 00:01:47.917 00:01:47.917 common: 00:01:47.917 00:01:47.917 bus: 00:01:47.917 pci, vdev, 00:01:47.917 mempool: 00:01:47.917 ring, 00:01:47.917 dma: 00:01:47.917 00:01:47.917 net: 00:01:47.917 00:01:47.917 crypto: 00:01:47.917 00:01:47.917 compress: 00:01:47.917 00:01:47.917 vdpa: 00:01:47.917 00:01:47.917 00:01:47.917 Message: 00:01:47.917 ================= 00:01:47.917 Content Skipped 00:01:47.917 ================= 00:01:47.917 00:01:47.917 apps: 00:01:47.917 dumpcap: explicitly disabled via build config 00:01:47.917 graph: explicitly disabled via build config 00:01:47.917 pdump: explicitly disabled via build config 00:01:47.917 proc-info: explicitly disabled via build config 00:01:47.917 test-acl: explicitly disabled via build config 00:01:47.917 test-bbdev: explicitly disabled via build config 00:01:47.917 test-cmdline: explicitly disabled via build config 00:01:47.917 test-compress-perf: explicitly disabled via build config 00:01:47.917 test-crypto-perf: explicitly disabled via build config 00:01:47.917 test-dma-perf: explicitly disabled via build config 00:01:47.917 test-eventdev: explicitly disabled via build config 00:01:47.917 test-fib: explicitly disabled via build config 00:01:47.917 test-flow-perf: explicitly disabled via build config 00:01:47.917 test-gpudev: explicitly disabled via build config 00:01:47.917 test-mldev: explicitly disabled via build config 00:01:47.917 test-pipeline: explicitly disabled via build config 00:01:47.917 test-pmd: 
explicitly disabled via build config 00:01:47.917 test-regex: explicitly disabled via build config 00:01:47.917 test-sad: explicitly disabled via build config 00:01:47.917 test-security-perf: explicitly disabled via build config 00:01:47.917 00:01:47.917 libs: 00:01:47.917 argparse: explicitly disabled via build config 00:01:47.917 metrics: explicitly disabled via build config 00:01:47.917 acl: explicitly disabled via build config 00:01:47.917 bbdev: explicitly disabled via build config 00:01:47.917 bitratestats: explicitly disabled via build config 00:01:47.917 bpf: explicitly disabled via build config 00:01:47.917 cfgfile: explicitly disabled via build config 00:01:47.917 distributor: explicitly disabled via build config 00:01:47.917 efd: explicitly disabled via build config 00:01:47.917 eventdev: explicitly disabled via build config 00:01:47.917 dispatcher: explicitly disabled via build config 00:01:47.917 gpudev: explicitly disabled via build config 00:01:47.917 gro: explicitly disabled via build config 00:01:47.917 gso: explicitly disabled via build config 00:01:47.917 ip_frag: explicitly disabled via build config 00:01:47.917 jobstats: explicitly disabled via build config 00:01:47.917 latencystats: explicitly disabled via build config 00:01:47.917 lpm: explicitly disabled via build config 00:01:47.917 member: explicitly disabled via build config 00:01:47.917 pcapng: explicitly disabled via build config 00:01:47.917 rawdev: explicitly disabled via build config 00:01:47.917 regexdev: explicitly disabled via build config 00:01:47.917 mldev: explicitly disabled via build config 00:01:47.917 rib: explicitly disabled via build config 00:01:47.917 sched: explicitly disabled via build config 00:01:47.917 stack: explicitly disabled via build config 00:01:47.917 ipsec: explicitly disabled via build config 00:01:47.917 pdcp: explicitly disabled via build config 00:01:47.917 fib: explicitly disabled via build config 00:01:47.917 port: explicitly disabled via build config 00:01:47.917 pdump: explicitly disabled via build config 00:01:47.917 table: explicitly disabled via build config 00:01:47.917 pipeline: explicitly disabled via build config 00:01:47.917 graph: explicitly disabled via build config 00:01:47.917 node: explicitly disabled via build config 00:01:47.917 00:01:47.917 drivers: 00:01:47.917 common/cpt: not in enabled drivers build config 00:01:47.917 common/dpaax: not in enabled drivers build config 00:01:47.917 common/iavf: not in enabled drivers build config 00:01:47.917 common/idpf: not in enabled drivers build config 00:01:47.917 common/ionic: not in enabled drivers build config 00:01:47.917 common/mvep: not in enabled drivers build config 00:01:47.917 common/octeontx: not in enabled drivers build config 00:01:47.917 bus/auxiliary: not in enabled drivers build config 00:01:47.917 bus/cdx: not in enabled drivers build config 00:01:47.917 bus/dpaa: not in enabled drivers build config 00:01:47.917 bus/fslmc: not in enabled drivers build config 00:01:47.917 bus/ifpga: not in enabled drivers build config 00:01:47.917 bus/platform: not in enabled drivers build config 00:01:47.917 bus/uacce: not in enabled drivers build config 00:01:47.917 bus/vmbus: not in enabled drivers build config 00:01:47.917 common/cnxk: not in enabled drivers build config 00:01:47.917 common/mlx5: not in enabled drivers build config 00:01:47.917 common/nfp: not in enabled drivers build config 00:01:47.917 common/nitrox: not in enabled drivers build config 00:01:47.917 common/qat: not in enabled drivers build config 
00:01:47.917 common/sfc_efx: not in enabled drivers build config 00:01:47.917 mempool/bucket: not in enabled drivers build config 00:01:47.917 mempool/cnxk: not in enabled drivers build config 00:01:47.917 mempool/dpaa: not in enabled drivers build config 00:01:47.917 mempool/dpaa2: not in enabled drivers build config 00:01:47.917 mempool/octeontx: not in enabled drivers build config 00:01:47.917 mempool/stack: not in enabled drivers build config 00:01:47.917 dma/cnxk: not in enabled drivers build config 00:01:47.917 dma/dpaa: not in enabled drivers build config 00:01:47.917 dma/dpaa2: not in enabled drivers build config 00:01:47.917 dma/hisilicon: not in enabled drivers build config 00:01:47.918 dma/idxd: not in enabled drivers build config 00:01:47.918 dma/ioat: not in enabled drivers build config 00:01:47.918 dma/skeleton: not in enabled drivers build config 00:01:47.918 net/af_packet: not in enabled drivers build config 00:01:47.918 net/af_xdp: not in enabled drivers build config 00:01:47.918 net/ark: not in enabled drivers build config 00:01:47.918 net/atlantic: not in enabled drivers build config 00:01:47.918 net/avp: not in enabled drivers build config 00:01:47.918 net/axgbe: not in enabled drivers build config 00:01:47.918 net/bnx2x: not in enabled drivers build config 00:01:47.918 net/bnxt: not in enabled drivers build config 00:01:47.918 net/bonding: not in enabled drivers build config 00:01:47.918 net/cnxk: not in enabled drivers build config 00:01:47.918 net/cpfl: not in enabled drivers build config 00:01:47.918 net/cxgbe: not in enabled drivers build config 00:01:47.918 net/dpaa: not in enabled drivers build config 00:01:47.918 net/dpaa2: not in enabled drivers build config 00:01:47.918 net/e1000: not in enabled drivers build config 00:01:47.918 net/ena: not in enabled drivers build config 00:01:47.918 net/enetc: not in enabled drivers build config 00:01:47.918 net/enetfec: not in enabled drivers build config 00:01:47.918 net/enic: not in enabled drivers build config 00:01:47.918 net/failsafe: not in enabled drivers build config 00:01:47.918 net/fm10k: not in enabled drivers build config 00:01:47.918 net/gve: not in enabled drivers build config 00:01:47.918 net/hinic: not in enabled drivers build config 00:01:47.918 net/hns3: not in enabled drivers build config 00:01:47.918 net/i40e: not in enabled drivers build config 00:01:47.918 net/iavf: not in enabled drivers build config 00:01:47.918 net/ice: not in enabled drivers build config 00:01:47.918 net/idpf: not in enabled drivers build config 00:01:47.918 net/igc: not in enabled drivers build config 00:01:47.918 net/ionic: not in enabled drivers build config 00:01:47.918 net/ipn3ke: not in enabled drivers build config 00:01:47.918 net/ixgbe: not in enabled drivers build config 00:01:47.918 net/mana: not in enabled drivers build config 00:01:47.918 net/memif: not in enabled drivers build config 00:01:47.918 net/mlx4: not in enabled drivers build config 00:01:47.918 net/mlx5: not in enabled drivers build config 00:01:47.918 net/mvneta: not in enabled drivers build config 00:01:47.918 net/mvpp2: not in enabled drivers build config 00:01:47.918 net/netvsc: not in enabled drivers build config 00:01:47.918 net/nfb: not in enabled drivers build config 00:01:47.918 net/nfp: not in enabled drivers build config 00:01:47.918 net/ngbe: not in enabled drivers build config 00:01:47.918 net/null: not in enabled drivers build config 00:01:47.918 net/octeontx: not in enabled drivers build config 00:01:47.918 net/octeon_ep: not in enabled 
drivers build config 00:01:47.918 net/pcap: not in enabled drivers build config 00:01:47.918 net/pfe: not in enabled drivers build config 00:01:47.918 net/qede: not in enabled drivers build config 00:01:47.918 net/ring: not in enabled drivers build config 00:01:47.918 net/sfc: not in enabled drivers build config 00:01:47.918 net/softnic: not in enabled drivers build config 00:01:47.918 net/tap: not in enabled drivers build config 00:01:47.918 net/thunderx: not in enabled drivers build config 00:01:47.918 net/txgbe: not in enabled drivers build config 00:01:47.918 net/vdev_netvsc: not in enabled drivers build config 00:01:47.918 net/vhost: not in enabled drivers build config 00:01:47.918 net/virtio: not in enabled drivers build config 00:01:47.918 net/vmxnet3: not in enabled drivers build config 00:01:47.918 raw/*: missing internal dependency, "rawdev" 00:01:47.918 crypto/armv8: not in enabled drivers build config 00:01:47.918 crypto/bcmfs: not in enabled drivers build config 00:01:47.918 crypto/caam_jr: not in enabled drivers build config 00:01:47.918 crypto/ccp: not in enabled drivers build config 00:01:47.918 crypto/cnxk: not in enabled drivers build config 00:01:47.918 crypto/dpaa_sec: not in enabled drivers build config 00:01:47.918 crypto/dpaa2_sec: not in enabled drivers build config 00:01:47.918 crypto/ipsec_mb: not in enabled drivers build config 00:01:47.918 crypto/mlx5: not in enabled drivers build config 00:01:47.918 crypto/mvsam: not in enabled drivers build config 00:01:47.918 crypto/nitrox: not in enabled drivers build config 00:01:47.918 crypto/null: not in enabled drivers build config 00:01:47.918 crypto/octeontx: not in enabled drivers build config 00:01:47.918 crypto/openssl: not in enabled drivers build config 00:01:47.918 crypto/scheduler: not in enabled drivers build config 00:01:47.918 crypto/uadk: not in enabled drivers build config 00:01:47.918 crypto/virtio: not in enabled drivers build config 00:01:47.918 compress/isal: not in enabled drivers build config 00:01:47.918 compress/mlx5: not in enabled drivers build config 00:01:47.918 compress/nitrox: not in enabled drivers build config 00:01:47.918 compress/octeontx: not in enabled drivers build config 00:01:47.918 compress/zlib: not in enabled drivers build config 00:01:47.918 regex/*: missing internal dependency, "regexdev" 00:01:47.918 ml/*: missing internal dependency, "mldev" 00:01:47.918 vdpa/ifc: not in enabled drivers build config 00:01:47.918 vdpa/mlx5: not in enabled drivers build config 00:01:47.918 vdpa/nfp: not in enabled drivers build config 00:01:47.918 vdpa/sfc: not in enabled drivers build config 00:01:47.918 event/*: missing internal dependency, "eventdev" 00:01:47.918 baseband/*: missing internal dependency, "bbdev" 00:01:47.918 gpu/*: missing internal dependency, "gpudev" 00:01:47.918 00:01:47.918 00:01:47.918 Build targets in project: 85 00:01:47.918 00:01:47.918 DPDK 24.03.0 00:01:47.918 00:01:47.918 User defined options 00:01:47.918 buildtype : debug 00:01:47.918 default_library : static 00:01:47.918 libdir : lib 00:01:47.918 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:47.918 c_args : -fPIC -Werror 00:01:47.918 c_link_args : 00:01:47.918 cpu_instruction_set: native 00:01:47.918 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:47.918 disable_libs : 
port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:47.918 enable_docs : false 00:01:47.918 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:47.918 enable_kmods : false 00:01:47.918 max_lcores : 128 00:01:47.918 tests : false 00:01:47.918 00:01:47.918 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:48.181 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:01:48.181 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:48.181 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:48.181 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:48.181 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:48.181 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:48.447 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:48.447 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:48.447 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:48.447 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:48.447 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:48.447 [11/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:48.447 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:48.447 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:48.447 [14/268] Linking static target lib/librte_kvargs.a 00:01:48.447 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:48.447 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:48.447 [17/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:48.447 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:48.447 [19/268] Linking static target lib/librte_log.a 00:01:48.706 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:48.706 [21/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:48.706 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:48.706 [23/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:48.706 [24/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:48.706 [25/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:48.706 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:48.706 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:48.706 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:48.706 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:48.706 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:48.706 [31/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:48.706 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:48.706 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:48.706 [34/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:48.964 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:48.964 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:48.964 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:48.964 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:48.964 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:48.964 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:48.964 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:48.964 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:48.964 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:48.964 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:48.964 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:48.964 [46/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:48.964 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:48.964 [48/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:48.964 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:48.964 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:48.964 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:48.964 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:48.964 [53/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:48.964 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:48.964 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:48.964 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:48.964 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:48.965 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:48.965 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:48.965 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:48.965 [61/268] Linking static target lib/librte_telemetry.a 00:01:48.965 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:48.965 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:48.965 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:48.965 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:48.965 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:48.965 [67/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:48.965 [68/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:48.965 [69/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:48.965 [70/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:48.965 [71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:48.965 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:48.965 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:48.965 [74/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:48.965 [75/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:48.965 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:48.965 [77/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:48.965 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:48.965 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:48.965 [80/268] Linking static target lib/librte_pci.a 00:01:48.965 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:48.965 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:48.965 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:48.965 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:48.965 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:48.965 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:48.965 [87/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.965 [88/268] Linking static target lib/librte_ring.a 00:01:48.965 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:48.965 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:48.965 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:48.965 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:48.965 [93/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:48.965 [94/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:48.965 [95/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:48.965 [96/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:48.965 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:48.965 [98/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:48.965 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:48.965 [100/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:48.965 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:48.965 [102/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:48.965 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:48.965 [104/268] Linking static target lib/librte_rcu.a 00:01:49.228 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:49.228 [106/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:49.228 [107/268] Linking static target lib/librte_eal.a 00:01:49.228 [108/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:49.228 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:49.228 [110/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:49.228 [111/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:49.228 [112/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:49.228 [113/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:49.228 [114/268] Linking static target lib/librte_mempool.a 00:01:49.228 [115/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:49.228 
[116/268] Linking static target lib/librte_mbuf.a 00:01:49.228 [117/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.228 [118/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.487 [119/268] Linking target lib/librte_log.so.24.1 00:01:49.487 [120/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:49.487 [121/268] Linking static target lib/librte_net.a 00:01:49.487 [122/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.487 [123/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:49.487 [124/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:49.487 [125/268] Linking static target lib/librte_meter.a 00:01:49.487 [126/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:49.487 [127/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:49.487 [128/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.487 [129/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:49.487 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:49.487 [131/268] Linking static target lib/librte_timer.a 00:01:49.487 [132/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:49.487 [133/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:49.487 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:49.487 [135/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:49.487 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:49.487 [137/268] Linking static target lib/librte_cmdline.a 00:01:49.487 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:49.487 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:49.487 [140/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:49.487 [141/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:49.487 [142/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:49.487 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:49.487 [144/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:49.487 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:49.487 [146/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:49.487 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:49.487 [148/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:49.487 [149/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:49.487 [150/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:49.487 [151/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:49.487 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:49.487 [153/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.487 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:49.487 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 
00:01:49.487 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:49.487 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:49.487 [158/268] Linking target lib/librte_kvargs.so.24.1 00:01:49.747 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:49.747 [160/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:49.747 [161/268] Linking target lib/librte_telemetry.so.24.1 00:01:49.747 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:49.747 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:49.747 [164/268] Linking static target lib/librte_dmadev.a 00:01:49.747 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:49.747 [166/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:49.747 [167/268] Linking static target lib/librte_power.a 00:01:49.747 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:49.747 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:49.747 [170/268] Linking static target lib/librte_compressdev.a 00:01:49.747 [171/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:49.747 [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:49.747 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:49.747 [174/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:49.747 [175/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.747 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:49.747 [177/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.747 [178/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:49.747 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:49.747 [180/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:49.747 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:49.747 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:49.747 [183/268] Linking static target lib/librte_hash.a 00:01:49.747 [184/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:49.747 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:49.747 [186/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:49.747 [187/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:49.747 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:49.747 [189/268] Linking static target lib/librte_reorder.a 00:01:49.747 [190/268] Linking static target lib/librte_security.a 00:01:49.747 [191/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:49.747 [192/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:49.747 [193/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:49.747 [194/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:49.747 [195/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:49.747 [196/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 
00:01:49.747 [197/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:50.006 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:50.006 [199/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.006 [200/268] Linking static target lib/librte_cryptodev.a 00:01:50.006 [201/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:50.006 [202/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:50.006 [203/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:50.006 [204/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:50.006 [205/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:50.006 [206/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.006 [207/268] Linking static target drivers/librte_bus_vdev.a 00:01:50.006 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:50.006 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:50.006 [210/268] Linking static target drivers/librte_bus_pci.a 00:01:50.006 [211/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:50.006 [212/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.006 [213/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:50.006 [214/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:50.006 [215/268] Linking static target drivers/librte_mempool_ring.a 00:01:50.006 [216/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:50.265 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.265 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:50.265 [219/268] Linking static target lib/librte_ethdev.a 00:01:50.265 [220/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.265 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.265 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.523 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.523 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.523 [225/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.781 [226/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.781 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.781 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:50.781 [229/268] Linking static target lib/librte_vhost.a 00:01:52.156 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.722 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.285 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson 
to capture output) 00:02:01.814 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.814 [234/268] Linking target lib/librte_eal.so.24.1 00:02:01.814 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:01.814 [236/268] Linking target lib/librte_timer.so.24.1 00:02:01.814 [237/268] Linking target lib/librte_pci.so.24.1 00:02:01.814 [238/268] Linking target lib/librte_ring.so.24.1 00:02:01.814 [239/268] Linking target lib/librte_meter.so.24.1 00:02:01.814 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:01.815 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:01.815 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:01.815 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:01.815 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:01.815 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:01.815 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:01.815 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:01.815 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:01.815 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:02.072 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:02.072 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:02.072 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:02.072 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:02.072 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:02.329 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:02.329 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:02:02.329 [257/268] Linking target lib/librte_net.so.24.1 00:02:02.329 [258/268] Linking target lib/librte_reorder.so.24.1 00:02:02.329 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:02.329 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:02.329 [261/268] Linking target lib/librte_security.so.24.1 00:02:02.329 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:02.329 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:02.586 [264/268] Linking target lib/librte_hash.so.24.1 00:02:02.586 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:02.586 [266/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:02.586 [267/268] Linking target lib/librte_vhost.so.24.1 00:02:02.586 [268/268] Linking target lib/librte_power.so.24.1 00:02:02.586 INFO: autodetecting backend as ninja 00:02:02.586 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 72 00:02:03.519 CC lib/ut_mock/mock.o 00:02:03.519 CC lib/ut/ut.o 00:02:03.519 CC lib/log/log.o 00:02:03.519 CC lib/log/log_flags.o 00:02:03.519 CC lib/log/log_deprecated.o 00:02:03.777 LIB libspdk_ut.a 00:02:03.777 LIB libspdk_log.a 00:02:03.777 LIB libspdk_ut_mock.a 00:02:04.035 CC lib/util/bit_array.o 00:02:04.035 CC lib/util/base64.o 00:02:04.035 CC lib/util/cpuset.o 00:02:04.035 CC lib/ioat/ioat.o 00:02:04.035 CXX lib/trace_parser/trace.o 00:02:04.035 CC lib/util/crc16.o 
00:02:04.035 CC lib/util/crc32.o 00:02:04.035 CC lib/util/crc32c.o 00:02:04.035 CC lib/util/crc32_ieee.o 00:02:04.035 CC lib/util/crc64.o 00:02:04.035 CC lib/util/dif.o 00:02:04.035 CC lib/util/fd.o 00:02:04.035 CC lib/util/fd_group.o 00:02:04.035 CC lib/util/file.o 00:02:04.035 CC lib/util/iov.o 00:02:04.035 CC lib/util/hexlify.o 00:02:04.035 CC lib/dma/dma.o 00:02:04.035 CC lib/util/math.o 00:02:04.035 CC lib/util/net.o 00:02:04.035 CC lib/util/pipe.o 00:02:04.035 CC lib/util/strerror_tls.o 00:02:04.035 CC lib/util/string.o 00:02:04.035 CC lib/util/xor.o 00:02:04.035 CC lib/util/uuid.o 00:02:04.035 CC lib/util/zipf.o 00:02:04.035 CC lib/util/md5.o 00:02:04.293 CC lib/vfio_user/host/vfio_user_pci.o 00:02:04.293 CC lib/vfio_user/host/vfio_user.o 00:02:04.293 LIB libspdk_dma.a 00:02:04.293 LIB libspdk_ioat.a 00:02:04.293 LIB libspdk_vfio_user.a 00:02:04.293 LIB libspdk_util.a 00:02:04.551 LIB libspdk_trace_parser.a 00:02:04.551 CC lib/json/json_parse.o 00:02:04.551 CC lib/json/json_util.o 00:02:04.551 CC lib/json/json_write.o 00:02:04.808 CC lib/rdma_utils/rdma_utils.o 00:02:04.808 CC lib/env_dpdk/env.o 00:02:04.808 CC lib/vmd/vmd.o 00:02:04.808 CC lib/env_dpdk/memory.o 00:02:04.808 CC lib/vmd/led.o 00:02:04.808 CC lib/env_dpdk/pci.o 00:02:04.808 CC lib/env_dpdk/init.o 00:02:04.808 CC lib/env_dpdk/pci_ioat.o 00:02:04.808 CC lib/env_dpdk/threads.o 00:02:04.808 CC lib/env_dpdk/pci_virtio.o 00:02:04.808 CC lib/conf/conf.o 00:02:04.808 CC lib/env_dpdk/pci_vmd.o 00:02:04.808 CC lib/env_dpdk/pci_idxd.o 00:02:04.808 CC lib/env_dpdk/pci_event.o 00:02:04.808 CC lib/env_dpdk/sigbus_handler.o 00:02:04.808 CC lib/env_dpdk/pci_dpdk.o 00:02:04.808 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:04.808 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:04.808 CC lib/idxd/idxd.o 00:02:04.808 CC lib/idxd/idxd_user.o 00:02:04.809 CC lib/idxd/idxd_kernel.o 00:02:04.809 LIB libspdk_json.a 00:02:04.809 LIB libspdk_conf.a 00:02:04.809 LIB libspdk_rdma_utils.a 00:02:05.066 LIB libspdk_idxd.a 00:02:05.066 LIB libspdk_vmd.a 00:02:05.066 CC lib/rdma_provider/common.o 00:02:05.066 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:05.066 CC lib/jsonrpc/jsonrpc_server.o 00:02:05.066 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:05.066 CC lib/jsonrpc/jsonrpc_client.o 00:02:05.066 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:05.324 LIB libspdk_rdma_provider.a 00:02:05.324 LIB libspdk_jsonrpc.a 00:02:05.582 CC lib/rpc/rpc.o 00:02:05.582 LIB libspdk_env_dpdk.a 00:02:05.840 LIB libspdk_rpc.a 00:02:06.096 CC lib/keyring/keyring.o 00:02:06.096 CC lib/keyring/keyring_rpc.o 00:02:06.096 CC lib/trace/trace.o 00:02:06.096 CC lib/trace/trace_flags.o 00:02:06.096 CC lib/trace/trace_rpc.o 00:02:06.096 CC lib/notify/notify.o 00:02:06.096 CC lib/notify/notify_rpc.o 00:02:06.096 LIB libspdk_notify.a 00:02:06.096 LIB libspdk_keyring.a 00:02:06.355 LIB libspdk_trace.a 00:02:06.614 CC lib/thread/thread.o 00:02:06.614 CC lib/thread/iobuf.o 00:02:06.614 CC lib/sock/sock.o 00:02:06.614 CC lib/sock/sock_rpc.o 00:02:06.872 LIB libspdk_sock.a 00:02:07.130 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:07.130 CC lib/nvme/nvme_ctrlr.o 00:02:07.130 CC lib/nvme/nvme_ns_cmd.o 00:02:07.130 CC lib/nvme/nvme_fabric.o 00:02:07.130 CC lib/nvme/nvme_pcie_common.o 00:02:07.130 CC lib/nvme/nvme_ns.o 00:02:07.130 CC lib/nvme/nvme_pcie.o 00:02:07.130 CC lib/nvme/nvme_qpair.o 00:02:07.130 CC lib/nvme/nvme.o 00:02:07.130 CC lib/nvme/nvme_quirks.o 00:02:07.130 CC lib/nvme/nvme_transport.o 00:02:07.130 CC lib/nvme/nvme_discovery.o 00:02:07.130 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:07.130 CC 
lib/nvme/nvme_ns_ocssd_cmd.o 00:02:07.130 CC lib/nvme/nvme_tcp.o 00:02:07.130 CC lib/nvme/nvme_opal.o 00:02:07.130 CC lib/nvme/nvme_io_msg.o 00:02:07.130 CC lib/nvme/nvme_poll_group.o 00:02:07.130 CC lib/nvme/nvme_zns.o 00:02:07.130 CC lib/nvme/nvme_stubs.o 00:02:07.130 CC lib/nvme/nvme_auth.o 00:02:07.130 CC lib/nvme/nvme_cuse.o 00:02:07.130 CC lib/nvme/nvme_vfio_user.o 00:02:07.130 CC lib/nvme/nvme_rdma.o 00:02:07.388 LIB libspdk_thread.a 00:02:07.647 CC lib/blob/blobstore.o 00:02:07.647 CC lib/blob/request.o 00:02:07.647 CC lib/blob/zeroes.o 00:02:07.647 CC lib/blob/blob_bs_dev.o 00:02:07.647 CC lib/fsdev/fsdev.o 00:02:07.647 CC lib/fsdev/fsdev_rpc.o 00:02:07.647 CC lib/fsdev/fsdev_io.o 00:02:07.647 CC lib/init/subsystem.o 00:02:07.647 CC lib/init/json_config.o 00:02:07.647 CC lib/accel/accel.o 00:02:07.647 CC lib/init/subsystem_rpc.o 00:02:07.647 CC lib/accel/accel_sw.o 00:02:07.647 CC lib/accel/accel_rpc.o 00:02:07.647 CC lib/init/rpc.o 00:02:07.647 CC lib/virtio/virtio_vhost_user.o 00:02:07.647 CC lib/virtio/virtio_vfio_user.o 00:02:07.647 CC lib/virtio/virtio.o 00:02:07.647 CC lib/vfu_tgt/tgt_endpoint.o 00:02:07.647 CC lib/vfu_tgt/tgt_rpc.o 00:02:07.647 CC lib/virtio/virtio_pci.o 00:02:07.647 LIB libspdk_init.a 00:02:07.905 LIB libspdk_virtio.a 00:02:07.905 LIB libspdk_vfu_tgt.a 00:02:07.905 LIB libspdk_fsdev.a 00:02:08.164 CC lib/event/reactor.o 00:02:08.164 CC lib/event/app.o 00:02:08.164 CC lib/event/scheduler_static.o 00:02:08.164 CC lib/event/log_rpc.o 00:02:08.164 CC lib/event/app_rpc.o 00:02:08.164 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:08.422 LIB libspdk_event.a 00:02:08.422 LIB libspdk_accel.a 00:02:08.422 LIB libspdk_nvme.a 00:02:08.682 LIB libspdk_fuse_dispatcher.a 00:02:08.682 CC lib/bdev/bdev.o 00:02:08.682 CC lib/bdev/bdev_rpc.o 00:02:08.682 CC lib/bdev/bdev_zone.o 00:02:08.682 CC lib/bdev/part.o 00:02:08.682 CC lib/bdev/scsi_nvme.o 00:02:09.249 LIB libspdk_blob.a 00:02:09.508 CC lib/blobfs/blobfs.o 00:02:09.508 CC lib/blobfs/tree.o 00:02:09.508 CC lib/lvol/lvol.o 00:02:10.074 LIB libspdk_lvol.a 00:02:10.075 LIB libspdk_blobfs.a 00:02:10.333 LIB libspdk_bdev.a 00:02:10.901 CC lib/nbd/nbd.o 00:02:10.901 CC lib/nbd/nbd_rpc.o 00:02:10.901 CC lib/scsi/lun.o 00:02:10.901 CC lib/scsi/dev.o 00:02:10.901 CC lib/scsi/scsi.o 00:02:10.901 CC lib/scsi/port.o 00:02:10.901 CC lib/scsi/scsi_bdev.o 00:02:10.901 CC lib/scsi/scsi_pr.o 00:02:10.901 CC lib/ublk/ublk.o 00:02:10.901 CC lib/scsi/scsi_rpc.o 00:02:10.901 CC lib/scsi/task.o 00:02:10.901 CC lib/ublk/ublk_rpc.o 00:02:10.901 CC lib/nvmf/ctrlr.o 00:02:10.901 CC lib/nvmf/ctrlr_bdev.o 00:02:10.901 CC lib/nvmf/ctrlr_discovery.o 00:02:10.901 CC lib/nvmf/nvmf.o 00:02:10.901 CC lib/nvmf/nvmf_rpc.o 00:02:10.901 CC lib/nvmf/subsystem.o 00:02:10.901 CC lib/nvmf/transport.o 00:02:10.901 CC lib/nvmf/tcp.o 00:02:10.901 CC lib/nvmf/stubs.o 00:02:10.901 CC lib/nvmf/mdns_server.o 00:02:10.901 CC lib/nvmf/vfio_user.o 00:02:10.901 CC lib/nvmf/rdma.o 00:02:10.901 CC lib/ftl/ftl_core.o 00:02:10.901 CC lib/ftl/ftl_init.o 00:02:10.901 CC lib/ftl/ftl_layout.o 00:02:10.901 CC lib/nvmf/auth.o 00:02:10.901 CC lib/ftl/ftl_debug.o 00:02:10.901 CC lib/ftl/ftl_io.o 00:02:10.901 CC lib/ftl/ftl_sb.o 00:02:10.901 CC lib/ftl/ftl_l2p.o 00:02:10.901 CC lib/ftl/ftl_l2p_flat.o 00:02:10.901 CC lib/ftl/ftl_nv_cache.o 00:02:10.901 CC lib/ftl/ftl_band_ops.o 00:02:10.901 CC lib/ftl/ftl_band.o 00:02:10.901 CC lib/ftl/ftl_writer.o 00:02:10.901 CC lib/ftl/ftl_reloc.o 00:02:10.901 CC lib/ftl/ftl_rq.o 00:02:10.901 CC lib/ftl/ftl_l2p_cache.o 00:02:10.901 CC 
lib/ftl/ftl_p2l.o 00:02:10.901 CC lib/ftl/ftl_p2l_log.o 00:02:10.901 CC lib/ftl/mngt/ftl_mngt.o 00:02:10.901 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:10.901 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:10.901 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:10.901 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:10.901 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:10.901 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:10.901 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:10.901 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:10.901 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:10.901 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:10.901 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:10.901 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:10.901 CC lib/ftl/utils/ftl_conf.o 00:02:10.901 CC lib/ftl/utils/ftl_md.o 00:02:10.901 CC lib/ftl/utils/ftl_mempool.o 00:02:10.901 CC lib/ftl/utils/ftl_bitmap.o 00:02:10.901 CC lib/ftl/utils/ftl_property.o 00:02:10.901 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:10.901 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:10.901 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:10.901 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:10.901 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:10.901 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:10.901 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:10.901 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:10.901 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:10.901 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:10.901 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:10.901 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:10.901 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:10.901 CC lib/ftl/base/ftl_base_dev.o 00:02:10.901 CC lib/ftl/base/ftl_base_bdev.o 00:02:10.901 CC lib/ftl/ftl_trace.o 00:02:11.159 LIB libspdk_nbd.a 00:02:11.160 LIB libspdk_scsi.a 00:02:11.418 LIB libspdk_ublk.a 00:02:11.418 CC lib/iscsi/iscsi.o 00:02:11.418 CC lib/iscsi/conn.o 00:02:11.418 CC lib/iscsi/init_grp.o 00:02:11.418 CC lib/iscsi/param.o 00:02:11.418 CC lib/iscsi/portal_grp.o 00:02:11.418 CC lib/iscsi/tgt_node.o 00:02:11.418 CC lib/iscsi/task.o 00:02:11.418 CC lib/iscsi/iscsi_subsystem.o 00:02:11.418 CC lib/iscsi/iscsi_rpc.o 00:02:11.418 CC lib/vhost/vhost_rpc.o 00:02:11.418 CC lib/vhost/vhost_scsi.o 00:02:11.418 CC lib/vhost/vhost.o 00:02:11.418 CC lib/vhost/vhost_blk.o 00:02:11.418 CC lib/vhost/rte_vhost_user.o 00:02:11.418 LIB libspdk_ftl.a 00:02:11.985 LIB libspdk_nvmf.a 00:02:12.244 LIB libspdk_vhost.a 00:02:12.244 LIB libspdk_iscsi.a 00:02:12.810 CC module/env_dpdk/env_dpdk_rpc.o 00:02:12.810 CC module/vfu_device/vfu_virtio.o 00:02:12.810 CC module/vfu_device/vfu_virtio_scsi.o 00:02:12.810 CC module/vfu_device/vfu_virtio_blk.o 00:02:12.810 CC module/vfu_device/vfu_virtio_rpc.o 00:02:12.810 CC module/vfu_device/vfu_virtio_fs.o 00:02:12.810 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:12.810 CC module/fsdev/aio/fsdev_aio.o 00:02:12.810 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:12.810 CC module/fsdev/aio/linux_aio_mgr.o 00:02:12.810 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:12.810 CC module/keyring/linux/keyring.o 00:02:12.810 CC module/keyring/linux/keyring_rpc.o 00:02:12.810 CC module/accel/ioat/accel_ioat_rpc.o 00:02:12.810 CC module/accel/ioat/accel_ioat.o 00:02:12.810 CC module/blob/bdev/blob_bdev.o 00:02:12.810 CC module/scheduler/gscheduler/gscheduler.o 00:02:12.810 CC module/keyring/file/keyring_rpc.o 00:02:12.810 CC module/keyring/file/keyring.o 00:02:12.810 LIB libspdk_env_dpdk_rpc.a 00:02:12.810 CC module/accel/dsa/accel_dsa.o 00:02:12.810 CC module/accel/error/accel_error.o 00:02:12.810 CC module/accel/dsa/accel_dsa_rpc.o 00:02:12.810 CC 
module/accel/error/accel_error_rpc.o 00:02:12.810 CC module/accel/iaa/accel_iaa.o 00:02:12.810 CC module/accel/iaa/accel_iaa_rpc.o 00:02:12.810 CC module/sock/posix/posix.o 00:02:13.069 LIB libspdk_scheduler_dpdk_governor.a 00:02:13.069 LIB libspdk_keyring_linux.a 00:02:13.069 LIB libspdk_scheduler_gscheduler.a 00:02:13.069 LIB libspdk_keyring_file.a 00:02:13.069 LIB libspdk_scheduler_dynamic.a 00:02:13.069 LIB libspdk_accel_ioat.a 00:02:13.069 LIB libspdk_accel_error.a 00:02:13.069 LIB libspdk_accel_iaa.a 00:02:13.069 LIB libspdk_blob_bdev.a 00:02:13.069 LIB libspdk_accel_dsa.a 00:02:13.069 LIB libspdk_vfu_device.a 00:02:13.327 LIB libspdk_fsdev_aio.a 00:02:13.327 LIB libspdk_sock_posix.a 00:02:13.327 CC module/bdev/gpt/gpt.o 00:02:13.327 CC module/bdev/gpt/vbdev_gpt.o 00:02:13.327 CC module/blobfs/bdev/blobfs_bdev.o 00:02:13.327 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:13.327 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:13.327 CC module/bdev/delay/vbdev_delay.o 00:02:13.327 CC module/bdev/error/vbdev_error.o 00:02:13.327 CC module/bdev/aio/bdev_aio.o 00:02:13.327 CC module/bdev/aio/bdev_aio_rpc.o 00:02:13.327 CC module/bdev/error/vbdev_error_rpc.o 00:02:13.327 CC module/bdev/ftl/bdev_ftl.o 00:02:13.327 CC module/bdev/null/bdev_null_rpc.o 00:02:13.327 CC module/bdev/null/bdev_null.o 00:02:13.327 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:13.327 CC module/bdev/nvme/bdev_nvme.o 00:02:13.327 CC module/bdev/nvme/bdev_mdns_client.o 00:02:13.327 CC module/bdev/passthru/vbdev_passthru.o 00:02:13.327 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:13.327 CC module/bdev/nvme/nvme_rpc.o 00:02:13.327 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:13.327 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:13.327 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:13.327 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:13.327 CC module/bdev/nvme/vbdev_opal.o 00:02:13.327 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:13.327 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:13.327 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:13.327 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:13.327 CC module/bdev/malloc/bdev_malloc.o 00:02:13.327 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:13.327 CC module/bdev/split/vbdev_split.o 00:02:13.327 CC module/bdev/split/vbdev_split_rpc.o 00:02:13.327 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:13.327 CC module/bdev/lvol/vbdev_lvol.o 00:02:13.327 CC module/bdev/raid/bdev_raid_rpc.o 00:02:13.327 CC module/bdev/raid/bdev_raid.o 00:02:13.327 CC module/bdev/raid/bdev_raid_sb.o 00:02:13.327 CC module/bdev/raid/raid0.o 00:02:13.327 CC module/bdev/raid/raid1.o 00:02:13.327 CC module/bdev/raid/concat.o 00:02:13.327 CC module/bdev/iscsi/bdev_iscsi.o 00:02:13.327 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:13.595 LIB libspdk_blobfs_bdev.a 00:02:13.595 LIB libspdk_bdev_gpt.a 00:02:13.595 LIB libspdk_bdev_error.a 00:02:13.595 LIB libspdk_bdev_null.a 00:02:13.595 LIB libspdk_bdev_split.a 00:02:13.595 LIB libspdk_bdev_passthru.a 00:02:13.595 LIB libspdk_bdev_zone_block.a 00:02:13.595 LIB libspdk_bdev_ftl.a 00:02:13.595 LIB libspdk_bdev_delay.a 00:02:13.595 LIB libspdk_bdev_malloc.a 00:02:13.857 LIB libspdk_bdev_aio.a 00:02:13.857 LIB libspdk_bdev_iscsi.a 00:02:13.857 LIB libspdk_bdev_lvol.a 00:02:13.857 LIB libspdk_bdev_virtio.a 00:02:14.115 LIB libspdk_bdev_raid.a 00:02:15.050 LIB libspdk_bdev_nvme.a 00:02:15.313 CC module/event/subsystems/scheduler/scheduler.o 00:02:15.571 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:15.571 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:15.571 CC 
module/event/subsystems/iobuf/iobuf.o 00:02:15.571 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:15.571 CC module/event/subsystems/vmd/vmd.o 00:02:15.571 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:15.571 CC module/event/subsystems/sock/sock.o 00:02:15.571 CC module/event/subsystems/keyring/keyring.o 00:02:15.571 CC module/event/subsystems/fsdev/fsdev.o 00:02:15.571 LIB libspdk_event_vfu_tgt.a 00:02:15.571 LIB libspdk_event_scheduler.a 00:02:15.571 LIB libspdk_event_keyring.a 00:02:15.571 LIB libspdk_event_iobuf.a 00:02:15.571 LIB libspdk_event_vhost_blk.a 00:02:15.571 LIB libspdk_event_vmd.a 00:02:15.571 LIB libspdk_event_sock.a 00:02:15.571 LIB libspdk_event_fsdev.a 00:02:15.829 CC module/event/subsystems/accel/accel.o 00:02:16.087 LIB libspdk_event_accel.a 00:02:16.345 CC module/event/subsystems/bdev/bdev.o 00:02:16.345 LIB libspdk_event_bdev.a 00:02:16.603 CC module/event/subsystems/ublk/ublk.o 00:02:16.603 CC module/event/subsystems/nbd/nbd.o 00:02:16.603 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:16.603 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:16.603 CC module/event/subsystems/scsi/scsi.o 00:02:16.861 LIB libspdk_event_ublk.a 00:02:16.861 LIB libspdk_event_nbd.a 00:02:16.861 LIB libspdk_event_scsi.a 00:02:16.861 LIB libspdk_event_nvmf.a 00:02:17.119 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:17.119 CC module/event/subsystems/iscsi/iscsi.o 00:02:17.119 LIB libspdk_event_vhost_scsi.a 00:02:17.119 LIB libspdk_event_iscsi.a 00:02:17.689 CC app/spdk_lspci/spdk_lspci.o 00:02:17.689 CC app/trace_record/trace_record.o 00:02:17.689 CC test/rpc_client/rpc_client_test.o 00:02:17.689 CXX app/trace/trace.o 00:02:17.689 CC app/spdk_nvme_perf/perf.o 00:02:17.689 CC app/spdk_nvme_identify/identify.o 00:02:17.689 TEST_HEADER include/spdk/accel.h 00:02:17.689 TEST_HEADER include/spdk/accel_module.h 00:02:17.689 TEST_HEADER include/spdk/barrier.h 00:02:17.689 TEST_HEADER include/spdk/assert.h 00:02:17.689 TEST_HEADER include/spdk/base64.h 00:02:17.689 TEST_HEADER include/spdk/bdev.h 00:02:17.689 TEST_HEADER include/spdk/bdev_module.h 00:02:17.689 TEST_HEADER include/spdk/bdev_zone.h 00:02:17.689 CC app/spdk_top/spdk_top.o 00:02:17.689 TEST_HEADER include/spdk/blob_bdev.h 00:02:17.689 TEST_HEADER include/spdk/bit_pool.h 00:02:17.689 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:17.689 TEST_HEADER include/spdk/bit_array.h 00:02:17.689 TEST_HEADER include/spdk/blobfs.h 00:02:17.689 CC app/spdk_nvme_discover/discovery_aer.o 00:02:17.689 TEST_HEADER include/spdk/conf.h 00:02:17.689 TEST_HEADER include/spdk/config.h 00:02:17.689 TEST_HEADER include/spdk/cpuset.h 00:02:17.689 TEST_HEADER include/spdk/blob.h 00:02:17.689 TEST_HEADER include/spdk/crc32.h 00:02:17.689 TEST_HEADER include/spdk/crc16.h 00:02:17.689 TEST_HEADER include/spdk/endian.h 00:02:17.689 TEST_HEADER include/spdk/dif.h 00:02:17.689 TEST_HEADER include/spdk/crc64.h 00:02:17.689 TEST_HEADER include/spdk/dma.h 00:02:17.689 TEST_HEADER include/spdk/env.h 00:02:17.689 TEST_HEADER include/spdk/env_dpdk.h 00:02:17.689 TEST_HEADER include/spdk/event.h 00:02:17.689 TEST_HEADER include/spdk/fd_group.h 00:02:17.689 TEST_HEADER include/spdk/file.h 00:02:17.689 TEST_HEADER include/spdk/fd.h 00:02:17.689 TEST_HEADER include/spdk/fsdev.h 00:02:17.689 TEST_HEADER include/spdk/fsdev_module.h 00:02:17.689 TEST_HEADER include/spdk/ftl.h 00:02:17.689 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:17.689 TEST_HEADER include/spdk/gpt_spec.h 00:02:17.689 TEST_HEADER include/spdk/hexlify.h 00:02:17.689 TEST_HEADER 
include/spdk/histogram_data.h 00:02:17.689 TEST_HEADER include/spdk/idxd_spec.h 00:02:17.689 TEST_HEADER include/spdk/idxd.h 00:02:17.689 TEST_HEADER include/spdk/init.h 00:02:17.689 TEST_HEADER include/spdk/ioat.h 00:02:17.689 TEST_HEADER include/spdk/iscsi_spec.h 00:02:17.689 TEST_HEADER include/spdk/json.h 00:02:17.689 TEST_HEADER include/spdk/ioat_spec.h 00:02:17.689 TEST_HEADER include/spdk/keyring.h 00:02:17.689 TEST_HEADER include/spdk/jsonrpc.h 00:02:17.689 TEST_HEADER include/spdk/keyring_module.h 00:02:17.689 TEST_HEADER include/spdk/likely.h 00:02:17.689 TEST_HEADER include/spdk/log.h 00:02:17.689 TEST_HEADER include/spdk/lvol.h 00:02:17.689 TEST_HEADER include/spdk/memory.h 00:02:17.689 TEST_HEADER include/spdk/md5.h 00:02:17.689 TEST_HEADER include/spdk/mmio.h 00:02:17.689 TEST_HEADER include/spdk/nbd.h 00:02:17.689 TEST_HEADER include/spdk/net.h 00:02:17.689 TEST_HEADER include/spdk/notify.h 00:02:17.689 TEST_HEADER include/spdk/nvme.h 00:02:17.689 CC app/iscsi_tgt/iscsi_tgt.o 00:02:17.689 TEST_HEADER include/spdk/nvme_intel.h 00:02:17.689 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:17.689 TEST_HEADER include/spdk/nvme_spec.h 00:02:17.689 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:17.689 TEST_HEADER include/spdk/nvme_zns.h 00:02:17.689 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:17.689 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:17.689 TEST_HEADER include/spdk/nvmf.h 00:02:17.689 TEST_HEADER include/spdk/nvmf_transport.h 00:02:17.689 TEST_HEADER include/spdk/opal.h 00:02:17.689 TEST_HEADER include/spdk/nvmf_spec.h 00:02:17.689 TEST_HEADER include/spdk/opal_spec.h 00:02:17.689 TEST_HEADER include/spdk/pci_ids.h 00:02:17.689 TEST_HEADER include/spdk/pipe.h 00:02:17.689 TEST_HEADER include/spdk/queue.h 00:02:17.689 TEST_HEADER include/spdk/reduce.h 00:02:17.689 TEST_HEADER include/spdk/rpc.h 00:02:17.689 TEST_HEADER include/spdk/scheduler.h 00:02:17.689 TEST_HEADER include/spdk/scsi_spec.h 00:02:17.689 TEST_HEADER include/spdk/scsi.h 00:02:17.689 TEST_HEADER include/spdk/sock.h 00:02:17.689 TEST_HEADER include/spdk/stdinc.h 00:02:17.689 TEST_HEADER include/spdk/string.h 00:02:17.689 TEST_HEADER include/spdk/thread.h 00:02:17.689 TEST_HEADER include/spdk/trace_parser.h 00:02:17.689 TEST_HEADER include/spdk/trace.h 00:02:17.689 TEST_HEADER include/spdk/tree.h 00:02:17.689 TEST_HEADER include/spdk/ublk.h 00:02:17.689 TEST_HEADER include/spdk/util.h 00:02:17.689 TEST_HEADER include/spdk/uuid.h 00:02:17.689 TEST_HEADER include/spdk/version.h 00:02:17.689 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:17.689 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:17.689 TEST_HEADER include/spdk/vhost.h 00:02:17.689 TEST_HEADER include/spdk/vmd.h 00:02:17.689 TEST_HEADER include/spdk/xor.h 00:02:17.689 TEST_HEADER include/spdk/zipf.h 00:02:17.689 CXX test/cpp_headers/accel.o 00:02:17.689 CXX test/cpp_headers/accel_module.o 00:02:17.689 CXX test/cpp_headers/assert.o 00:02:17.689 CC app/spdk_tgt/spdk_tgt.o 00:02:17.689 CXX test/cpp_headers/barrier.o 00:02:17.689 CXX test/cpp_headers/base64.o 00:02:17.689 CXX test/cpp_headers/bdev.o 00:02:17.689 CXX test/cpp_headers/bdev_module.o 00:02:17.689 CXX test/cpp_headers/bit_array.o 00:02:17.689 CXX test/cpp_headers/bdev_zone.o 00:02:17.689 CXX test/cpp_headers/bit_pool.o 00:02:17.689 CXX test/cpp_headers/blobfs_bdev.o 00:02:17.689 CXX test/cpp_headers/blob_bdev.o 00:02:17.689 CXX test/cpp_headers/blobfs.o 00:02:17.689 CXX test/cpp_headers/blob.o 00:02:17.689 CXX test/cpp_headers/config.o 00:02:17.689 CXX test/cpp_headers/conf.o 00:02:17.689 CXX 
test/cpp_headers/crc16.o 00:02:17.689 CXX test/cpp_headers/cpuset.o 00:02:17.689 CXX test/cpp_headers/crc32.o 00:02:17.689 CC test/app/jsoncat/jsoncat.o 00:02:17.689 CXX test/cpp_headers/crc64.o 00:02:17.689 CXX test/cpp_headers/dif.o 00:02:17.689 CXX test/cpp_headers/dma.o 00:02:17.689 CXX test/cpp_headers/endian.o 00:02:17.689 CXX test/cpp_headers/env.o 00:02:17.689 CXX test/cpp_headers/env_dpdk.o 00:02:17.689 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:17.690 CXX test/cpp_headers/event.o 00:02:17.690 CXX test/cpp_headers/fd.o 00:02:17.690 CXX test/cpp_headers/fd_group.o 00:02:17.690 CXX test/cpp_headers/file.o 00:02:17.690 CXX test/cpp_headers/fsdev.o 00:02:17.690 CC test/app/histogram_perf/histogram_perf.o 00:02:17.690 CXX test/cpp_headers/fsdev_module.o 00:02:17.690 CXX test/cpp_headers/ftl.o 00:02:17.690 CXX test/cpp_headers/fuse_dispatcher.o 00:02:17.690 CXX test/cpp_headers/gpt_spec.o 00:02:17.690 CXX test/cpp_headers/hexlify.o 00:02:17.690 CC examples/util/zipf/zipf.o 00:02:17.690 CXX test/cpp_headers/histogram_data.o 00:02:17.690 CXX test/cpp_headers/idxd.o 00:02:17.690 CC examples/ioat/perf/perf.o 00:02:17.690 CXX test/cpp_headers/idxd_spec.o 00:02:17.690 CXX test/cpp_headers/init.o 00:02:17.690 CXX test/cpp_headers/ioat.o 00:02:17.690 CXX test/cpp_headers/ioat_spec.o 00:02:17.690 CC examples/ioat/verify/verify.o 00:02:17.690 CC app/spdk_dd/spdk_dd.o 00:02:17.690 CC test/app/stub/stub.o 00:02:17.690 CC test/thread/poller_perf/poller_perf.o 00:02:17.690 CC test/thread/lock/spdk_lock.o 00:02:17.690 CC test/env/pci/pci_ut.o 00:02:17.690 CC test/env/memory/memory_ut.o 00:02:17.690 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:17.690 CC app/fio/nvme/fio_plugin.o 00:02:17.690 CC app/nvmf_tgt/nvmf_main.o 00:02:17.690 CXX test/cpp_headers/iscsi_spec.o 00:02:17.690 CC test/env/vtophys/vtophys.o 00:02:17.690 LINK spdk_lspci 00:02:17.690 CC test/dma/test_dma/test_dma.o 00:02:17.690 CC test/app/bdev_svc/bdev_svc.o 00:02:17.690 CC app/fio/bdev/fio_plugin.o 00:02:17.690 LINK rpc_client_test 00:02:17.690 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:17.690 CC test/env/mem_callbacks/mem_callbacks.o 00:02:17.690 LINK spdk_nvme_discover 00:02:17.690 LINK histogram_perf 00:02:17.690 LINK jsoncat 00:02:17.690 LINK spdk_trace_record 00:02:17.690 CXX test/cpp_headers/json.o 00:02:17.690 CXX test/cpp_headers/jsonrpc.o 00:02:17.690 CXX test/cpp_headers/keyring.o 00:02:17.952 CXX test/cpp_headers/keyring_module.o 00:02:17.952 LINK zipf 00:02:17.952 CXX test/cpp_headers/likely.o 00:02:17.952 CXX test/cpp_headers/log.o 00:02:17.952 CXX test/cpp_headers/md5.o 00:02:17.952 CXX test/cpp_headers/lvol.o 00:02:17.952 CXX test/cpp_headers/memory.o 00:02:17.952 CXX test/cpp_headers/mmio.o 00:02:17.952 LINK poller_perf 00:02:17.952 CXX test/cpp_headers/nbd.o 00:02:17.952 CXX test/cpp_headers/net.o 00:02:17.952 CXX test/cpp_headers/notify.o 00:02:17.952 CXX test/cpp_headers/nvme.o 00:02:17.952 CXX test/cpp_headers/nvme_intel.o 00:02:17.952 CXX test/cpp_headers/nvme_ocssd.o 00:02:17.952 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:17.952 CXX test/cpp_headers/nvme_spec.o 00:02:17.952 CXX test/cpp_headers/nvme_zns.o 00:02:17.952 CXX test/cpp_headers/nvmf_cmd.o 00:02:17.952 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:17.952 CXX test/cpp_headers/nvmf.o 00:02:17.952 CXX test/cpp_headers/nvmf_spec.o 00:02:17.952 CXX test/cpp_headers/nvmf_transport.o 00:02:17.952 CXX test/cpp_headers/opal.o 00:02:17.952 CXX test/cpp_headers/opal_spec.o 00:02:17.952 CXX test/cpp_headers/pci_ids.o 00:02:17.952 CXX 
test/cpp_headers/pipe.o 00:02:17.952 CXX test/cpp_headers/queue.o 00:02:17.952 CXX test/cpp_headers/reduce.o 00:02:17.952 CXX test/cpp_headers/rpc.o 00:02:17.952 LINK env_dpdk_post_init 00:02:17.952 CXX test/cpp_headers/scheduler.o 00:02:17.952 CXX test/cpp_headers/scsi.o 00:02:17.952 CXX test/cpp_headers/scsi_spec.o 00:02:17.952 LINK vtophys 00:02:17.952 CXX test/cpp_headers/sock.o 00:02:17.952 LINK iscsi_tgt 00:02:17.952 LINK interrupt_tgt 00:02:17.952 CXX test/cpp_headers/stdinc.o 00:02:17.952 LINK stub 00:02:17.952 CXX test/cpp_headers/string.o 00:02:17.952 CXX test/cpp_headers/thread.o 00:02:17.952 CXX test/cpp_headers/trace.o 00:02:17.952 LINK verify 00:02:17.952 LINK ioat_perf 00:02:17.952 LINK nvmf_tgt 00:02:17.952 LINK spdk_tgt 00:02:17.952 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:17.952 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:17.952 LINK bdev_svc 00:02:17.952 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:17.952 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:17.952 LINK spdk_trace 00:02:17.952 CXX test/cpp_headers/trace_parser.o 00:02:17.952 CXX test/cpp_headers/tree.o 00:02:17.952 CXX test/cpp_headers/ublk.o 00:02:17.952 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:17.952 CXX test/cpp_headers/util.o 00:02:17.952 CXX test/cpp_headers/uuid.o 00:02:17.952 CXX test/cpp_headers/version.o 00:02:17.952 CXX test/cpp_headers/vfio_user_pci.o 00:02:17.952 CXX test/cpp_headers/vfio_user_spec.o 00:02:17.952 CXX test/cpp_headers/vhost.o 00:02:17.952 CXX test/cpp_headers/vmd.o 00:02:17.952 CXX test/cpp_headers/xor.o 00:02:17.952 CXX test/cpp_headers/zipf.o 00:02:18.210 LINK pci_ut 00:02:18.210 LINK nvme_fuzz 00:02:18.210 LINK spdk_dd 00:02:18.210 LINK test_dma 00:02:18.210 LINK spdk_bdev 00:02:18.210 LINK llvm_vfio_fuzz 00:02:18.210 LINK mem_callbacks 00:02:18.210 LINK spdk_nvme_identify 00:02:18.467 LINK spdk_nvme 00:02:18.467 LINK vhost_fuzz 00:02:18.467 LINK spdk_nvme_perf 00:02:18.467 LINK spdk_top 00:02:18.467 CC examples/vmd/lsvmd/lsvmd.o 00:02:18.467 CC examples/sock/hello_world/hello_sock.o 00:02:18.468 LINK llvm_nvme_fuzz 00:02:18.468 CC examples/idxd/perf/perf.o 00:02:18.468 CC examples/vmd/led/led.o 00:02:18.468 CC examples/thread/thread/thread_ex.o 00:02:18.468 CC app/vhost/vhost.o 00:02:18.725 LINK lsvmd 00:02:18.725 LINK led 00:02:18.725 LINK memory_ut 00:02:18.725 LINK hello_sock 00:02:18.725 LINK vhost 00:02:18.725 LINK thread 00:02:18.725 LINK idxd_perf 00:02:18.982 LINK spdk_lock 00:02:19.239 LINK iscsi_fuzz 00:02:19.496 CC examples/nvme/hotplug/hotplug.o 00:02:19.496 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:19.496 CC examples/nvme/arbitration/arbitration.o 00:02:19.496 CC examples/nvme/abort/abort.o 00:02:19.496 CC examples/nvme/hello_world/hello_world.o 00:02:19.496 CC examples/nvme/reconnect/reconnect.o 00:02:19.496 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:19.496 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:19.496 CC test/event/reactor_perf/reactor_perf.o 00:02:19.496 CC test/event/event_perf/event_perf.o 00:02:19.496 CC test/event/reactor/reactor.o 00:02:19.496 CC test/event/app_repeat/app_repeat.o 00:02:19.496 CC test/event/scheduler/scheduler.o 00:02:19.497 LINK cmb_copy 00:02:19.497 LINK pmr_persistence 00:02:19.497 LINK hotplug 00:02:19.497 LINK hello_world 00:02:19.753 LINK reactor 00:02:19.753 LINK reactor_perf 00:02:19.753 LINK event_perf 00:02:19.753 LINK app_repeat 00:02:19.753 LINK abort 00:02:19.753 LINK arbitration 00:02:19.753 LINK reconnect 00:02:19.753 LINK nvme_manage 00:02:19.753 LINK scheduler 
00:02:20.009 CC test/nvme/simple_copy/simple_copy.o 00:02:20.009 CC test/nvme/aer/aer.o 00:02:20.009 CC test/nvme/reset/reset.o 00:02:20.009 CC test/nvme/startup/startup.o 00:02:20.009 CC test/nvme/boot_partition/boot_partition.o 00:02:20.009 CC test/nvme/sgl/sgl.o 00:02:20.009 CC test/nvme/err_injection/err_injection.o 00:02:20.009 CC test/nvme/overhead/overhead.o 00:02:20.009 CC test/nvme/reserve/reserve.o 00:02:20.009 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:20.009 CC test/nvme/fused_ordering/fused_ordering.o 00:02:20.009 CC test/nvme/compliance/nvme_compliance.o 00:02:20.009 CC test/nvme/fdp/fdp.o 00:02:20.009 CC test/nvme/e2edp/nvme_dp.o 00:02:20.009 CC test/nvme/cuse/cuse.o 00:02:20.009 CC test/nvme/connect_stress/connect_stress.o 00:02:20.009 CC test/blobfs/mkfs/mkfs.o 00:02:20.009 CC test/accel/dif/dif.o 00:02:20.009 CC test/lvol/esnap/esnap.o 00:02:20.009 LINK startup 00:02:20.010 LINK err_injection 00:02:20.010 LINK boot_partition 00:02:20.267 LINK doorbell_aers 00:02:20.267 LINK connect_stress 00:02:20.267 LINK reserve 00:02:20.267 LINK fused_ordering 00:02:20.267 LINK simple_copy 00:02:20.267 LINK aer 00:02:20.267 LINK sgl 00:02:20.267 LINK nvme_dp 00:02:20.267 LINK overhead 00:02:20.267 LINK mkfs 00:02:20.267 LINK reset 00:02:20.267 LINK nvme_compliance 00:02:20.267 LINK fdp 00:02:20.524 CC examples/accel/perf/accel_perf.o 00:02:20.524 CC examples/blob/hello_world/hello_blob.o 00:02:20.524 CC examples/blob/cli/blobcli.o 00:02:20.524 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:20.524 LINK dif 00:02:20.782 LINK hello_blob 00:02:20.782 LINK hello_fsdev 00:02:20.782 LINK accel_perf 00:02:20.782 LINK blobcli 00:02:21.040 LINK cuse 00:02:21.606 CC examples/bdev/hello_world/hello_bdev.o 00:02:21.606 CC examples/bdev/bdevperf/bdevperf.o 00:02:21.606 LINK hello_bdev 00:02:22.212 LINK bdevperf 00:02:22.212 CC test/bdev/bdevio/bdevio.o 00:02:22.560 LINK bdevio 00:02:23.498 CC examples/nvmf/nvmf/nvmf.o 00:02:23.757 LINK nvmf 00:02:23.757 LINK esnap 00:02:25.132 00:02:25.132 real 0m45.573s 00:02:25.132 user 6m53.439s 00:02:25.132 sys 2m17.438s 00:02:25.132 12:24:05 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:25.132 12:24:05 make -- common/autotest_common.sh@10 -- $ set +x 00:02:25.132 ************************************ 00:02:25.132 END TEST make 00:02:25.132 ************************************ 00:02:25.132 12:24:05 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:25.132 12:24:05 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:25.132 12:24:05 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:25.132 12:24:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.132 12:24:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:25.132 12:24:05 -- pm/common@44 -- $ pid=545613 00:02:25.132 12:24:05 -- pm/common@50 -- $ kill -TERM 545613 00:02:25.132 12:24:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.132 12:24:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:25.132 12:24:05 -- pm/common@44 -- $ pid=545615 00:02:25.132 12:24:05 -- pm/common@50 -- $ kill -TERM 545615 00:02:25.132 12:24:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.132 12:24:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:25.132 12:24:05 -- pm/common@44 -- $ pid=545617 
00:02:25.132 12:24:05 -- pm/common@50 -- $ kill -TERM 545617 00:02:25.132 12:24:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.132 12:24:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:25.132 12:24:05 -- pm/common@44 -- $ pid=545641 00:02:25.132 12:24:05 -- pm/common@50 -- $ sudo -E kill -TERM 545641 00:02:25.132 12:24:05 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:25.132 12:24:05 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:02:25.391 12:24:05 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:25.391 12:24:05 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:25.391 12:24:05 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:25.391 12:24:05 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:25.391 12:24:05 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:25.391 12:24:05 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:25.391 12:24:05 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:25.391 12:24:05 -- scripts/common.sh@336 -- # IFS=.-: 00:02:25.391 12:24:05 -- scripts/common.sh@336 -- # read -ra ver1 00:02:25.391 12:24:05 -- scripts/common.sh@337 -- # IFS=.-: 00:02:25.391 12:24:05 -- scripts/common.sh@337 -- # read -ra ver2 00:02:25.391 12:24:05 -- scripts/common.sh@338 -- # local 'op=<' 00:02:25.391 12:24:05 -- scripts/common.sh@340 -- # ver1_l=2 00:02:25.391 12:24:05 -- scripts/common.sh@341 -- # ver2_l=1 00:02:25.391 12:24:05 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:25.391 12:24:05 -- scripts/common.sh@344 -- # case "$op" in 00:02:25.391 12:24:05 -- scripts/common.sh@345 -- # : 1 00:02:25.391 12:24:05 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:25.391 12:24:05 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:25.391 12:24:05 -- scripts/common.sh@365 -- # decimal 1 00:02:25.391 12:24:05 -- scripts/common.sh@353 -- # local d=1 00:02:25.391 12:24:05 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:25.391 12:24:05 -- scripts/common.sh@355 -- # echo 1 00:02:25.391 12:24:05 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:25.391 12:24:05 -- scripts/common.sh@366 -- # decimal 2 00:02:25.391 12:24:05 -- scripts/common.sh@353 -- # local d=2 00:02:25.391 12:24:05 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:25.391 12:24:05 -- scripts/common.sh@355 -- # echo 2 00:02:25.391 12:24:05 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:25.391 12:24:05 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:25.391 12:24:05 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:25.391 12:24:05 -- scripts/common.sh@368 -- # return 0 00:02:25.391 12:24:05 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:25.391 12:24:05 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:25.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:25.391 --rc genhtml_branch_coverage=1 00:02:25.391 --rc genhtml_function_coverage=1 00:02:25.391 --rc genhtml_legend=1 00:02:25.391 --rc geninfo_all_blocks=1 00:02:25.391 --rc geninfo_unexecuted_blocks=1 00:02:25.391 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:25.391 ' 00:02:25.391 12:24:05 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:25.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:25.391 --rc genhtml_branch_coverage=1 00:02:25.391 --rc genhtml_function_coverage=1 00:02:25.391 --rc genhtml_legend=1 00:02:25.391 --rc geninfo_all_blocks=1 00:02:25.391 --rc geninfo_unexecuted_blocks=1 00:02:25.391 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:25.391 ' 00:02:25.391 12:24:05 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:25.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:25.391 --rc genhtml_branch_coverage=1 00:02:25.391 --rc genhtml_function_coverage=1 00:02:25.391 --rc genhtml_legend=1 00:02:25.391 --rc geninfo_all_blocks=1 00:02:25.391 --rc geninfo_unexecuted_blocks=1 00:02:25.391 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:25.391 ' 00:02:25.391 12:24:05 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:25.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:25.391 --rc genhtml_branch_coverage=1 00:02:25.391 --rc genhtml_function_coverage=1 00:02:25.391 --rc genhtml_legend=1 00:02:25.391 --rc geninfo_all_blocks=1 00:02:25.391 --rc geninfo_unexecuted_blocks=1 00:02:25.391 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:25.391 ' 00:02:25.391 12:24:05 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:25.391 12:24:05 -- nvmf/common.sh@7 -- # uname -s 00:02:25.391 12:24:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:25.391 12:24:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:25.391 12:24:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:25.391 12:24:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:25.391 12:24:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:25.391 12:24:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:25.391 12:24:05 -- nvmf/common.sh@14 -- 
# NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:25.391 12:24:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:25.391 12:24:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:25.391 12:24:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:25.391 12:24:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:02:25.391 12:24:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:02:25.391 12:24:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:25.391 12:24:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:25.391 12:24:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:25.391 12:24:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:25.391 12:24:05 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:25.391 12:24:05 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:25.391 12:24:05 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:25.391 12:24:05 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:25.391 12:24:05 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:25.391 12:24:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.391 12:24:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.391 12:24:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.391 12:24:05 -- paths/export.sh@5 -- # export PATH 00:02:25.391 12:24:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.391 12:24:05 -- nvmf/common.sh@51 -- # : 0 00:02:25.391 12:24:05 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:25.391 12:24:05 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:25.391 12:24:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:25.391 12:24:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:25.391 12:24:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:25.391 12:24:05 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:25.391 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:25.391 12:24:05 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:25.391 12:24:05 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:25.391 12:24:05 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:25.391 12:24:05 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:25.391 12:24:05 -- spdk/autotest.sh@32 -- # uname -s 00:02:25.391 
12:24:05 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:25.391 12:24:05 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:25.391 12:24:05 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:25.391 12:24:05 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:25.391 12:24:05 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:25.391 12:24:05 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:25.391 12:24:05 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:25.391 12:24:05 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:25.391 12:24:05 -- spdk/autotest.sh@48 -- # udevadm_pid=604861 00:02:25.391 12:24:05 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:25.391 12:24:05 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:25.391 12:24:05 -- pm/common@17 -- # local monitor 00:02:25.391 12:24:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.391 12:24:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.391 12:24:05 -- pm/common@21 -- # date +%s 00:02:25.391 12:24:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.391 12:24:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.391 12:24:05 -- pm/common@21 -- # date +%s 00:02:25.391 12:24:05 -- pm/common@21 -- # date +%s 00:02:25.391 12:24:05 -- pm/common@25 -- # sleep 1 00:02:25.391 12:24:05 -- pm/common@21 -- # date +%s 00:02:25.391 12:24:05 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731669845 00:02:25.391 12:24:05 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731669845 00:02:25.391 12:24:05 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731669845 00:02:25.391 12:24:05 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731669845 00:02:25.391 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731669845_collect-cpu-load.pm.log 00:02:25.391 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731669845_collect-cpu-temp.pm.log 00:02:25.391 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731669845_collect-vmstat.pm.log 00:02:25.391 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731669845_collect-bmc-pm.bmc.pm.log 00:02:26.327 12:24:06 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:26.327 12:24:06 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:26.327 12:24:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:26.327 12:24:06 -- common/autotest_common.sh@10 -- # set +x 
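The xtrace above repeatedly steps through scripts/common.sh's version comparison (lt -> cmp_versions) to decide whether the installed lcov is older than 2.x and therefore needs the 1.x-style --rc options exported in LCOV_OPTS. Below is a minimal standalone bash sketch of that element-wise comparison; it is an illustration only, not the actual scripts/common.sh source, and lt_version is a hypothetical name.

    # Hypothetical helper mirroring the traced logic: split both versions on
    # '.', '-' and ':' and compare component by component; exit 0 if $1 < $2.
    lt_version() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # ver1 newer
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # ver1 older
        done
        return 1                                              # equal -> not less-than
    }
    # Matching the trace: lt_version "$(lcov --version | awk '{print $NF}')" 2
    # succeeds for lcov 1.15, so the 1.x-style coverage options are used.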
00:02:26.327 12:24:06 -- spdk/autotest.sh@59 -- # create_test_list 00:02:26.327 12:24:06 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:26.327 12:24:06 -- common/autotest_common.sh@10 -- # set +x 00:02:26.586 12:24:06 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:26.586 12:24:06 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:26.586 12:24:06 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:26.586 12:24:06 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:26.586 12:24:06 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:26.586 12:24:06 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:26.586 12:24:06 -- common/autotest_common.sh@1457 -- # uname 00:02:26.586 12:24:06 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:26.586 12:24:06 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:26.586 12:24:06 -- common/autotest_common.sh@1477 -- # uname 00:02:26.586 12:24:06 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:26.586 12:24:06 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:26.586 12:24:06 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:02:26.586 lcov: LCOV version 1.15 00:02:26.586 12:24:06 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:02:31.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcno 00:02:37.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:42.389 12:24:22 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:42.389 12:24:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:42.389 12:24:22 -- common/autotest_common.sh@10 -- # set +x 00:02:42.389 12:24:22 -- spdk/autotest.sh@78 -- # rm -f 00:02:42.389 12:24:22 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:46.575 0000:1a:00.0 (8086 0a54): Already using the nvme driver 00:02:46.575 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:46.575 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:46.575 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:46.575 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:46.575 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:46.575 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:46.575 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:46.575 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:46.575 
0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:46.575 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:46.575 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:46.575 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:46.575 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:46.575 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:46.575 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:46.575 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:48.475 12:24:28 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:02:48.475 12:24:28 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:02:48.475 12:24:28 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:02:48.475 12:24:28 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:02:48.475 12:24:28 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:02:48.475 12:24:28 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:02:48.475 12:24:28 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:02:48.475 12:24:28 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:48.475 12:24:28 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:02:48.475 12:24:28 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:48.475 12:24:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:48.475 12:24:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:48.733 12:24:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:48.733 12:24:28 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:48.733 12:24:28 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:48.733 No valid GPT data, bailing 00:02:48.733 12:24:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:48.733 12:24:28 -- scripts/common.sh@394 -- # pt= 00:02:48.733 12:24:28 -- scripts/common.sh@395 -- # return 1 00:02:48.733 12:24:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:48.733 1+0 records in 00:02:48.733 1+0 records out 00:02:48.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00477942 s, 219 MB/s 00:02:48.733 12:24:28 -- spdk/autotest.sh@105 -- # sync 00:02:48.733 12:24:28 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:48.733 12:24:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:48.733 12:24:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:54.000 12:24:33 -- spdk/autotest.sh@111 -- # uname -s 00:02:54.000 12:24:33 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:02:54.000 12:24:33 -- spdk/autotest.sh@111 -- # [[ 1 -eq 1 ]] 00:02:54.000 12:24:33 -- spdk/autotest.sh@112 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:54.000 12:24:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:54.000 12:24:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:54.000 12:24:33 -- common/autotest_common.sh@10 -- # set +x 00:02:54.000 ************************************ 00:02:54.000 START TEST setup.sh 00:02:54.000 ************************************ 00:02:54.000 12:24:33 setup.sh -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:54.000 * Looking for test storage... 
00:02:54.000 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:54.000 12:24:33 setup.sh -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:54.000 12:24:33 setup.sh -- common/autotest_common.sh@1693 -- # lcov --version 00:02:54.000 12:24:33 setup.sh -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:54.000 12:24:33 setup.sh -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@336 -- # IFS=.-: 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@336 -- # read -ra ver1 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@337 -- # IFS=.-: 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@337 -- # read -ra ver2 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@338 -- # local 'op=<' 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@340 -- # ver1_l=2 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@341 -- # ver2_l=1 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@344 -- # case "$op" in 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@345 -- # : 1 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@365 -- # decimal 1 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@353 -- # local d=1 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@355 -- # echo 1 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@365 -- # ver1[v]=1 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@366 -- # decimal 2 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@353 -- # local d=2 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@355 -- # echo 2 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@366 -- # ver2[v]=2 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:54.000 12:24:33 setup.sh -- scripts/common.sh@368 -- # return 0 00:02:54.000 12:24:33 setup.sh -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:54.000 12:24:33 setup.sh -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:54.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.000 --rc genhtml_branch_coverage=1 00:02:54.000 --rc genhtml_function_coverage=1 00:02:54.000 --rc genhtml_legend=1 00:02:54.000 --rc geninfo_all_blocks=1 00:02:54.000 --rc geninfo_unexecuted_blocks=1 00:02:54.000 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:54.000 ' 00:02:54.000 12:24:33 setup.sh -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:54.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.000 --rc genhtml_branch_coverage=1 00:02:54.000 --rc genhtml_function_coverage=1 00:02:54.000 --rc genhtml_legend=1 00:02:54.000 --rc geninfo_all_blocks=1 00:02:54.000 --rc geninfo_unexecuted_blocks=1 
00:02:54.000 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:54.000 ' 00:02:54.000 12:24:33 setup.sh -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:54.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.000 --rc genhtml_branch_coverage=1 00:02:54.000 --rc genhtml_function_coverage=1 00:02:54.000 --rc genhtml_legend=1 00:02:54.000 --rc geninfo_all_blocks=1 00:02:54.000 --rc geninfo_unexecuted_blocks=1 00:02:54.000 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:54.000 ' 00:02:54.000 12:24:33 setup.sh -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:54.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.000 --rc genhtml_branch_coverage=1 00:02:54.000 --rc genhtml_function_coverage=1 00:02:54.000 --rc genhtml_legend=1 00:02:54.000 --rc geninfo_all_blocks=1 00:02:54.000 --rc geninfo_unexecuted_blocks=1 00:02:54.000 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:54.000 ' 00:02:54.000 12:24:33 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:54.000 12:24:33 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:54.000 12:24:33 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:54.000 12:24:33 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:54.000 12:24:33 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:54.000 12:24:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:54.000 ************************************ 00:02:54.000 START TEST acl 00:02:54.000 ************************************ 00:02:54.000 12:24:33 setup.sh.acl -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:54.000 * Looking for test storage... 
00:02:54.000 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:54.000 12:24:33 setup.sh.acl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:54.000 12:24:33 setup.sh.acl -- common/autotest_common.sh@1693 -- # lcov --version 00:02:54.000 12:24:33 setup.sh.acl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:54.000 12:24:33 setup.sh.acl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@336 -- # IFS=.-: 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@336 -- # read -ra ver1 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@337 -- # IFS=.-: 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@337 -- # read -ra ver2 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@338 -- # local 'op=<' 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@340 -- # ver1_l=2 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@341 -- # ver2_l=1 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@344 -- # case "$op" in 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@345 -- # : 1 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@365 -- # decimal 1 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@353 -- # local d=1 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@355 -- # echo 1 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@365 -- # ver1[v]=1 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@366 -- # decimal 2 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@353 -- # local d=2 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@355 -- # echo 2 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@366 -- # ver2[v]=2 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:54.000 12:24:33 setup.sh.acl -- scripts/common.sh@368 -- # return 0 00:02:54.000 12:24:33 setup.sh.acl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:54.001 12:24:33 setup.sh.acl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:54.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.001 --rc genhtml_branch_coverage=1 00:02:54.001 --rc genhtml_function_coverage=1 00:02:54.001 --rc genhtml_legend=1 00:02:54.001 --rc geninfo_all_blocks=1 00:02:54.001 --rc geninfo_unexecuted_blocks=1 00:02:54.001 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:54.001 ' 00:02:54.001 12:24:33 setup.sh.acl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:54.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.001 --rc genhtml_branch_coverage=1 00:02:54.001 --rc 
genhtml_function_coverage=1 00:02:54.001 --rc genhtml_legend=1 00:02:54.001 --rc geninfo_all_blocks=1 00:02:54.001 --rc geninfo_unexecuted_blocks=1 00:02:54.001 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:54.001 ' 00:02:54.001 12:24:33 setup.sh.acl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:54.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.001 --rc genhtml_branch_coverage=1 00:02:54.001 --rc genhtml_function_coverage=1 00:02:54.001 --rc genhtml_legend=1 00:02:54.001 --rc geninfo_all_blocks=1 00:02:54.001 --rc geninfo_unexecuted_blocks=1 00:02:54.001 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:54.001 ' 00:02:54.001 12:24:33 setup.sh.acl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:54.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.001 --rc genhtml_branch_coverage=1 00:02:54.001 --rc genhtml_function_coverage=1 00:02:54.001 --rc genhtml_legend=1 00:02:54.001 --rc geninfo_all_blocks=1 00:02:54.001 --rc geninfo_unexecuted_blocks=1 00:02:54.001 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:54.001 ' 00:02:54.001 12:24:33 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:54.001 12:24:33 setup.sh.acl -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:02:54.001 12:24:33 setup.sh.acl -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:02:54.001 12:24:33 setup.sh.acl -- common/autotest_common.sh@1658 -- # local nvme bdf 00:02:54.001 12:24:33 setup.sh.acl -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:02:54.001 12:24:33 setup.sh.acl -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:02:54.001 12:24:33 setup.sh.acl -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:02:54.001 12:24:33 setup.sh.acl -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:54.001 12:24:33 setup.sh.acl -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:02:54.001 12:24:33 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:54.001 12:24:33 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:54.001 12:24:33 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:54.001 12:24:33 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:54.001 12:24:33 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:54.001 12:24:33 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:54.001 12:24:33 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:00.566 12:24:40 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:00.566 12:24:40 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:00.566 12:24:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:00.566 12:24:40 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:00.566 12:24:40 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:00.566 12:24:40 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:03.852 Hugepages 00:03:03.852 node hugesize free / total 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.852 12:24:43 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.852 00:03:03.852 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:1a:00.0 == *:*:*.* ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 
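What the status scan above amounts to: acl.sh reads each line of "setup.sh status" output, keeps only entries whose second field looks like a PCI BDF (*:*:*.*), passes over anything not bound to the nvme driver (hence the repeated "continue" for the ioatdma entries), and records the survivors in the devs array and drivers map. A rough standalone condensation follows; collect_nvme_devs is a hypothetical name and the blocked-device handling is simplified relative to the real test/setup/acl.sh.

    # Hypothetical condensation of the traced collection loop.
    collect_nvme_devs() {
        local -a devs=()
        local -A drivers=()
        local _ dev driver
        while read -r _ dev _ _ _ driver _; do
            [[ $dev == *:*:*.* ]] || continue            # skip hugepage/header lines
            [[ $driver == nvme ]] || continue            # ioatdma etc. are skipped
            [[ ${PCI_BLOCKED:-} == *"$dev"* ]] && continue
            devs+=("$dev")
            drivers["$dev"]=$driver
        done
        (( ${#devs[@]} )) && printf '%s\n' "${devs[@]}"
    }
    # e.g.  ./setup.sh status | collect_nvme_devs   ->   0000:1a:00.0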
00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\1\a\:\0\0\.\0* ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:03.852 12:24:43 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:03.852 12:24:43 setup.sh.acl -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:03.852 12:24:43 setup.sh.acl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:03.852 12:24:43 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:03.852 ************************************ 00:03:03.852 START TEST denied 00:03:03.852 ************************************ 00:03:03.852 12:24:43 setup.sh.acl.denied -- 
common/autotest_common.sh@1129 -- # denied 00:03:03.852 12:24:43 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:1a:00.0' 00:03:03.852 12:24:43 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:03.852 12:24:43 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:1a:00.0' 00:03:03.852 12:24:43 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.852 12:24:43 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:10.420 0000:1a:00.0 (8086 0a54): Skipping denied controller at 0000:1a:00.0 00:03:10.420 12:24:50 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:1a:00.0 00:03:10.420 12:24:50 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:10.420 12:24:50 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:10.420 12:24:50 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:1a:00.0 ]] 00:03:10.420 12:24:50 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:1a:00.0/driver 00:03:10.420 12:24:50 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:10.420 12:24:50 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:10.420 12:24:50 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:10.420 12:24:50 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:10.420 12:24:50 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:18.541 00:03:18.541 real 0m13.524s 00:03:18.541 user 0m4.190s 00:03:18.541 sys 0m8.529s 00:03:18.541 12:24:57 setup.sh.acl.denied -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:18.541 12:24:57 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:18.541 ************************************ 00:03:18.541 END TEST denied 00:03:18.541 ************************************ 00:03:18.541 12:24:57 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:18.541 12:24:57 setup.sh.acl -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:18.541 12:24:57 setup.sh.acl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:18.541 12:24:57 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:18.541 ************************************ 00:03:18.541 START TEST allowed 00:03:18.541 ************************************ 00:03:18.541 12:24:57 setup.sh.acl.allowed -- common/autotest_common.sh@1129 -- # allowed 00:03:18.541 12:24:57 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:1a:00.0 00:03:18.541 12:24:57 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:18.541 12:24:57 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:1a:00.0 .*: nvme -> .*' 00:03:18.541 12:24:57 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.541 12:24:57 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:26.656 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:03:26.656 12:25:06 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:26.656 12:25:06 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:26.656 12:25:06 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:26.656 12:25:06 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:26.656 12:25:06 setup.sh.acl.allowed 
-- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:33.222 00:03:33.222 real 0m15.595s 00:03:33.222 user 0m4.418s 00:03:33.222 sys 0m7.888s 00:03:33.222 12:25:13 setup.sh.acl.allowed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:33.222 12:25:13 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:33.222 ************************************ 00:03:33.222 END TEST allowed 00:03:33.222 ************************************ 00:03:33.222 00:03:33.222 real 0m39.557s 00:03:33.222 user 0m12.320s 00:03:33.222 sys 0m23.295s 00:03:33.222 12:25:13 setup.sh.acl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:33.222 12:25:13 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:33.222 ************************************ 00:03:33.222 END TEST acl 00:03:33.222 ************************************ 00:03:33.222 12:25:13 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:33.222 12:25:13 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:33.222 12:25:13 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:33.222 12:25:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:33.222 ************************************ 00:03:33.222 START TEST hugepages 00:03:33.222 ************************************ 00:03:33.222 12:25:13 setup.sh.hugepages -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:33.222 * Looking for test storage... 00:03:33.222 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:33.222 12:25:13 setup.sh.hugepages -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:33.222 12:25:13 setup.sh.hugepages -- common/autotest_common.sh@1693 -- # lcov --version 00:03:33.222 12:25:13 setup.sh.hugepages -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:33.222 12:25:13 setup.sh.hugepages -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@336 -- # IFS=.-: 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@336 -- # read -ra ver1 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@337 -- # IFS=.-: 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@337 -- # read -ra ver2 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@338 -- # local 'op=<' 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@340 -- # ver1_l=2 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@341 -- # ver2_l=1 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@344 -- # case "$op" in 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@345 -- # : 1 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@365 -- # decimal 1 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=1 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 1 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@365 -- # ver1[v]=1 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@366 -- # decimal 2 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=2 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 2 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@366 -- # ver2[v]=2 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:33.222 12:25:13 setup.sh.hugepages -- scripts/common.sh@368 -- # return 0 00:03:33.222 12:25:13 setup.sh.hugepages -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:33.222 12:25:13 setup.sh.hugepages -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:33.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.222 --rc genhtml_branch_coverage=1 00:03:33.222 --rc genhtml_function_coverage=1 00:03:33.222 --rc genhtml_legend=1 00:03:33.222 --rc geninfo_all_blocks=1 00:03:33.222 --rc geninfo_unexecuted_blocks=1 00:03:33.222 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:33.222 ' 00:03:33.223 12:25:13 setup.sh.hugepages -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:33.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.223 --rc genhtml_branch_coverage=1 00:03:33.223 --rc genhtml_function_coverage=1 00:03:33.223 --rc genhtml_legend=1 00:03:33.223 --rc geninfo_all_blocks=1 00:03:33.223 --rc geninfo_unexecuted_blocks=1 00:03:33.223 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:33.223 ' 00:03:33.223 12:25:13 setup.sh.hugepages -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:33.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.223 --rc genhtml_branch_coverage=1 00:03:33.223 --rc genhtml_function_coverage=1 00:03:33.223 --rc genhtml_legend=1 00:03:33.223 --rc geninfo_all_blocks=1 00:03:33.223 --rc geninfo_unexecuted_blocks=1 00:03:33.223 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:33.223 ' 00:03:33.223 12:25:13 setup.sh.hugepages -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:33.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.223 --rc genhtml_branch_coverage=1 00:03:33.223 --rc genhtml_function_coverage=1 00:03:33.223 --rc genhtml_legend=1 00:03:33.223 --rc geninfo_all_blocks=1 00:03:33.223 --rc geninfo_unexecuted_blocks=1 00:03:33.223 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:33.223 ' 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:33.223 12:25:13 
setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285424 kB' 'MemFree: 71385540 kB' 'MemAvailable: 76330668 kB' 'Buffers: 10204 kB' 'Cached: 13119792 kB' 'SwapCached: 0 kB' 'Active: 9633972 kB' 'Inactive: 4113404 kB' 'Active(anon): 8476124 kB' 'Inactive(anon): 0 kB' 'Active(file): 1157848 kB' 'Inactive(file): 4113404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620840 kB' 'Mapped: 186676 kB' 'Shmem: 7858744 kB' 'KReclaimable: 509796 kB' 'Slab: 1115944 kB' 'SReclaimable: 509796 kB' 'SUnreclaim: 606148 kB' 'KernelStack: 17648 kB' 'PageTables: 9076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52434164 kB' 'Committed_AS: 9828576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215584 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 810424 kB' 'DirectMap2M: 20885504 kB' 'DirectMap1G: 80740352 kB' 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.223 12:25:13 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.223 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r 
var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 
12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 
12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.224 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 
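The field-by-field scan above is get_meminfo from setup/common.sh walking /proc/meminfo with IFS=': ' and skipping every key until it reaches the requested one (here Hugepagesize), then echoing its value; that 2048 kB figure becomes default_hugepages. A minimal sketch of the same lookup pattern, using an illustrative helper name rather than the real SPDK function:

get_meminfo_field() {                       # illustrative name; the real helper is get_meminfo in setup/common.sh
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] || continue   # skip every other /proc/meminfo field, as in the trace above
        echo "$val"                         # value only, unit column dropped
        return 0
    done < /proc/meminfo
    return 1
}
get_meminfo_field Hugepagesize              # prints 2048 on this node, matching the "echo 2048" above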
00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:03:33.225 12:25:13 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup 00:03:33.225 12:25:13 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:33.225 12:25:13 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:33.225 12:25:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:33.225 ************************************ 00:03:33.225 START TEST single_node_setup 00:03:33.225 ************************************ 00:03:33.225 12:25:13 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1129 -- # single_node_setup 00:03:33.225 12:25:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0 00:03:33.225 12:25:13 setup.sh.hugepages.single_node_setup 
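hugepages.sh then records the sysfs and procfs knobs it will drive (the per-size nr_hugepages file and /proc/sys/vm/nr_hugepages), unsets HUGEMEM/HUGENODE/NRHUGE, counts two NUMA nodes, and has clear_hp zero every per-node hugepage pool before exporting CLEAR_HUGE=yes. A sketch of that clearing loop under the standard kernel sysfs layout (requires root; the exact wrapper is clear_hp in hugepages.sh):

for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"         # corresponds to the repeated "echo 0" steps above
    done
done
export CLEAR_HUGE=yes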
-- setup/hugepages.sh@48 -- # local size=2097152 00:03:33.225 12:25:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:03:33.225 12:25:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift 00:03:33.225 12:25:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0') 00:03:33.225 12:25:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids 00:03:33.225 12:25:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:33.225 12:25:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:33.225 12:25:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:03:33.225 12:25:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:03:33.225 12:25:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes 00:03:33.225 12:25:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:33.225 12:25:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:33.225 12:25:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:33.225 12:25:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:33.225 12:25:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:03:33.225 12:25:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:03:33.225 12:25:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:03:33.225 12:25:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0 00:03:33.225 12:25:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024 00:03:33.225 12:25:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0 00:03:33.225 12:25:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output 00:03:33.225 12:25:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.225 12:25:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:36.507 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:36.507 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:36.507 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:36.507 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:36.507 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:36.507 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:36.507 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:36.507 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:36.507 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:36.507 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:36.507 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:36.766 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:36.766 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:36.766 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:36.766 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:36.766 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:40.139 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:03:42.055 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # 
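single_node_setup asks for 2097152 kB of hugepages on node 0; with the 2048 kB default page size that works out to nr_hugepages=1024, which is handed to scripts/setup.sh as NRHUGE with HUGENODE=0. The vfio-pci lines above are that script rebinding the ioatdma channels and the NVMe device for userspace use. Roughly equivalent to the following, with the path taken from the trace and the env-var interface as a sketch:

NRHUGE=$(( 2097152 / 2048 ))                # 1024 pages of 2048 kB = 2 GiB on node 0
HUGENODE=0 NRHUGE=$NRHUGE CLEAR_HUGE=yes \
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh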
verify_nr_hugepages 00:03:42.055 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node 00:03:42.055 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t 00:03:42.055 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s 00:03:42.055 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp 00:03:42.055 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv 00:03:42.055 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon 00:03:42.055 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:42.055 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:42.055 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:42.055 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:42.055 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:42.055 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:42.055 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.055 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.055 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.055 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.055 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.055 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.055 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285424 kB' 'MemFree: 73535176 kB' 'MemAvailable: 78480208 kB' 'Buffers: 10204 kB' 'Cached: 13119972 kB' 'SwapCached: 0 kB' 'Active: 9637492 kB' 'Inactive: 4113404 kB' 'Active(anon): 8479644 kB' 'Inactive(anon): 0 kB' 'Active(file): 1157848 kB' 'Inactive(file): 4113404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623720 kB' 'Mapped: 186196 kB' 'Shmem: 7858924 kB' 'KReclaimable: 509700 kB' 'Slab: 1115392 kB' 'SReclaimable: 509700 kB' 'SUnreclaim: 605692 kB' 'KernelStack: 17632 kB' 'PageTables: 8968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482740 kB' 'Committed_AS: 9832072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215536 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 810424 kB' 'DirectMap2M: 20885504 kB' 'DirectMap1G: 80740352 kB' 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
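The /proc/meminfo dump above already reflects the allocation: HugePages_Total and HugePages_Free are 1024, Hugepagesize is 2048 kB, and the Hugetlb total equals the requested amount. A quick consistency check on those figures:

total=1024 size_kb=2048                     # HugePages_Total and Hugepagesize from the dump above
echo "$(( total * size_kb )) kB"            # 2097152 kB, the Hugetlb figure reported above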
00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.056 12:25:22 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.056 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 
00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:42.057 12:25:22 
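verify_nr_hugepages first decides whether anonymous (transparent) hugepages need to be counted: the 'always [madvise] never' string in the test is the transparent_hugepage/enabled knob, and since it is not pinned to [never] the function reads AnonHugePages, which is 0 kB here, so anon=0. The same logic, sketched with the illustrative helper from the earlier sketch:

thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)    # "always [madvise] never" on this node
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo_field AnonHugePages)                # 0 kB in the dumps above
else
    anon=0
fi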
setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285424 kB' 'MemFree: 73535184 kB' 'MemAvailable: 78480216 kB' 'Buffers: 10204 kB' 'Cached: 13119976 kB' 'SwapCached: 0 kB' 'Active: 9636384 kB' 'Inactive: 4113404 kB' 'Active(anon): 8478536 kB' 'Inactive(anon): 0 kB' 'Active(file): 1157848 kB' 'Inactive(file): 4113404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623080 kB' 'Mapped: 186076 kB' 'Shmem: 7858928 kB' 'KReclaimable: 509700 kB' 'Slab: 1115388 kB' 'SReclaimable: 509700 kB' 'SUnreclaim: 605688 kB' 'KernelStack: 17584 kB' 'PageTables: 8804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482740 kB' 'Committed_AS: 9832092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215504 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 810424 kB' 'DirectMap2M: 20885504 kB' 'DirectMap1G: 80740352 kB' 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.057 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # 
continue 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.058 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.068 12:25:22 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.068 12:25:22 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.068 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:42.069 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285424 kB' 'MemFree: 73535648 kB' 'MemAvailable: 78480680 kB' 'Buffers: 10204 kB' 'Cached: 13119992 kB' 'SwapCached: 0 kB' 'Active: 9636448 kB' 'Inactive: 4113404 kB' 'Active(anon): 8478600 kB' 'Inactive(anon): 0 kB' 'Active(file): 1157848 kB' 'Inactive(file): 4113404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623136 kB' 'Mapped: 186076 kB' 'Shmem: 7858944 kB' 'KReclaimable: 509700 kB' 'Slab: 1115388 kB' 'SReclaimable: 509700 kB' 'SUnreclaim: 605688 kB' 'KernelStack: 17600 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482740 kB' 'Committed_AS: 9832112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215504 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 810424 kB' 'DirectMap2M: 20885504 kB' 'DirectMap1G: 80740352 kB' 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.331 12:25:22 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.331 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.332 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.333 12:25:22 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 12:25:22 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:03:42.333 nr_hugepages=1024 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:42.333 resv_hugepages=0 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:42.333 surplus_hugepages=0 00:03:42.333 12:25:22 
setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:42.333 anon_hugepages=0 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285424 kB' 'MemFree: 73535648 kB' 'MemAvailable: 78480680 kB' 'Buffers: 10204 kB' 'Cached: 13120012 kB' 'SwapCached: 0 kB' 'Active: 9636460 kB' 'Inactive: 4113404 kB' 'Active(anon): 8478612 kB' 'Inactive(anon): 0 kB' 'Active(file): 1157848 kB' 'Inactive(file): 4113404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623128 kB' 'Mapped: 186076 kB' 'Shmem: 7858964 kB' 'KReclaimable: 509700 kB' 'Slab: 1115388 kB' 'SReclaimable: 509700 kB' 'SUnreclaim: 605688 kB' 'KernelStack: 17600 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482740 kB' 'Committed_AS: 9832136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215504 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 810424 kB' 'DirectMap2M: 20885504 kB' 'DirectMap1G: 80740352 kB' 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.334 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.335 12:25:22 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- 
setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 40085952 kB' 'MemUsed: 7978912 kB' 'SwapCached: 0 kB' 'Active: 4242160 kB' 'Inactive: 242532 kB' 'Active(anon): 3367132 kB' 'Inactive(anon): 0 kB' 'Active(file): 875028 kB' 'Inactive(file): 242532 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4001560 kB' 'Mapped: 100660 kB' 'AnonPages: 486504 kB' 'Shmem: 2884000 kB' 'KernelStack: 10664 kB' 'PageTables: 6116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 189100 kB' 'Slab: 547148 kB' 'SReclaimable: 189100 kB' 'SUnreclaim: 358048 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.337 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.337 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.337 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.337 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.337 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:42.337 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.337 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:42.337 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.337 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:42.337 12:25:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:42.337 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:42.337 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:42.337 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:42.337 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:42.337 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:03:42.337 node0=1024 expecting 1024 00:03:42.337 12:25:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:03:42.337 00:03:42.337 real 0m9.047s 00:03:42.337 user 0m1.832s 00:03:42.337 sys 0m3.943s 00:03:42.337 12:25:22 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:42.337 12:25:22 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x 00:03:42.337 ************************************ 00:03:42.337 END TEST single_node_setup 00:03:42.337 ************************************ 00:03:42.337 12:25:22 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc 00:03:42.337 12:25:22 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:42.337 12:25:22 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.337 12:25:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:42.337 ************************************ 00:03:42.337 START TEST even_2G_alloc 00:03:42.337 ************************************ 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1129 -- # even_2G_alloc 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local 
_nr_hugepages=1024 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.337 12:25:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:45.618 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:45.618 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:45.618 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:45.618 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:45.618 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:45.618 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:45.618 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:45.618 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:45.618 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:45.618 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:45.618 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:45.618 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:45.618 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:45.618 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:45.618 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:45.618 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:45.618 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:48.156 12:25:28 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285424 kB' 'MemFree: 73476060 kB' 'MemAvailable: 78421092 kB' 'Buffers: 10204 kB' 'Cached: 13120156 kB' 'SwapCached: 0 kB' 'Active: 9637500 kB' 'Inactive: 4113404 kB' 'Active(anon): 8479652 kB' 'Inactive(anon): 0 kB' 'Active(file): 1157848 kB' 'Inactive(file): 4113404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623312 kB' 'Mapped: 185392 kB' 'Shmem: 7859108 kB' 'KReclaimable: 509700 kB' 'Slab: 1115932 kB' 'SReclaimable: 509700 kB' 'SUnreclaim: 606232 kB' 'KernelStack: 17744 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482740 kB' 'Committed_AS: 9826144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215728 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 810424 kB' 'DirectMap2M: 20885504 kB' 'DirectMap1G: 80740352 kB' 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.156 
12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.156 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
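The loop being traced here is the meminfo lookup in setup/common.sh: get_meminfo walks /proc/meminfo (or the per-node file under /sys/devices/system/node) with IFS=': ', skips every field that is not the one requested (AnonHugePages in this pass), echoes the matching value, and falls back to 0 when the field never appears. A condensed sketch of that technique, reconstructed from the xtrace above; the real function first maps the file into an array and strips the "Node N" prefix, so this is a simplification rather than the exact SPDK source:

# get_meminfo <field> [node] -- simplified sketch of the lookup traced above
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # with a node argument the real script reads the per-node meminfo instead
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every non-matching field is skipped, as in the trace
        echo "$val"                        # kB value, or a page count for the HugePages_* fields
        return 0
    done < "$mem_f"
    echo 0                                 # field absent -> report 0
}

anon=$(get_meminfo AnonHugePages)          # -> 0 here, which is what hugepages.sh stores as anon=0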
00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.157 
12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.157 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285424 kB' 'MemFree: 73475548 kB' 'MemAvailable: 78420580 kB' 'Buffers: 10204 kB' 'Cached: 13120160 kB' 'SwapCached: 0 kB' 'Active: 9636556 kB' 
'Inactive: 4113404 kB' 'Active(anon): 8478708 kB' 'Inactive(anon): 0 kB' 'Active(file): 1157848 kB' 'Inactive(file): 4113404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622836 kB' 'Mapped: 185276 kB' 'Shmem: 7859112 kB' 'KReclaimable: 509700 kB' 'Slab: 1115920 kB' 'SReclaimable: 509700 kB' 'SUnreclaim: 606220 kB' 'KernelStack: 17568 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482740 kB' 'Committed_AS: 9823528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215584 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 810424 kB' 'DirectMap2M: 20885504 kB' 'DirectMap1G: 80740352 kB' 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.158 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.159 12:25:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.159 12:25:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.159 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285424 kB' 'MemFree: 73475652 kB' 'MemAvailable: 78420684 kB' 'Buffers: 10204 kB' 'Cached: 13120180 kB' 'SwapCached: 0 kB' 'Active: 9636480 kB' 'Inactive: 4113404 kB' 'Active(anon): 8478632 kB' 'Inactive(anon): 0 kB' 'Active(file): 1157848 kB' 'Inactive(file): 4113404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622808 kB' 'Mapped: 185276 kB' 'Shmem: 7859132 kB' 'KReclaimable: 509700 kB' 'Slab: 1115928 kB' 'SReclaimable: 509700 kB' 'SUnreclaim: 606228 kB' 'KernelStack: 17584 kB' 'PageTables: 8704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482740 kB' 'Committed_AS: 9823548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215600 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 810424 kB' 'DirectMap2M: 20885504 kB' 'DirectMap1G: 80740352 kB' 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.160 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.161 12:25:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.161 12:25:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.161 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.162 12:25:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # echo 0 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:03:48.162 nr_hugepages=1024 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:48.162 resv_hugepages=0 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:48.162 surplus_hugepages=0 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:48.162 anon_hugepages=0 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.162 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285424 kB' 'MemFree: 73476600 kB' 'MemAvailable: 78421632 kB' 'Buffers: 10204 kB' 'Cached: 13120200 kB' 'SwapCached: 0 kB' 'Active: 9636592 kB' 'Inactive: 4113404 kB' 'Active(anon): 8478744 kB' 'Inactive(anon): 0 kB' 'Active(file): 1157848 kB' 'Inactive(file): 4113404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622376 kB' 'Mapped: 185276 kB' 'Shmem: 7859152 kB' 'KReclaimable: 509700 kB' 'Slab: 1115928 kB' 'SReclaimable: 509700 kB' 'SUnreclaim: 606228 kB' 'KernelStack: 17584 kB' 'PageTables: 8700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482740 kB' 'Committed_AS: 9823568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215600 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
810424 kB' 'DirectMap2M: 20885504 kB' 'DirectMap1G: 80740352 kB' 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.163 12:25:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 12:25:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
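The field-by-field scan running above and below is the even_2G_alloc test's get_meminfo helper walking /proc/meminfo (and, further down, each NUMA node's own meminfo) one "key: value" pair at a time until it reaches the requested HugePages_* counter: with HugePages_Total at 1024 pages of 2048 kB, the trace then checks that the global total equals the requested count plus surplus and reserved pages, and that the pages split evenly, 512 per node, across the two nodes. A minimal standalone sketch of that lookup-and-check pattern, reconstructed from the setup/common.sh and setup/hugepages.sh lines visible in the trace (paths and helper names mirror the trace; the accounting shown is an illustration of what the log performs, not the repository's exact script):

#!/usr/bin/env bash
# Sketch of the meminfo lookup the trace is exercising: read /proc/meminfo,
# or a node's meminfo when a node id is given, and print one field's value.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node lookups read that node's meminfo; its lines carry a "Node N " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val unit
    while read -r var val unit; do
        var=${var%:}    # "HugePages_Total:" -> "HugePages_Total"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

# Accounting visible in the trace: 1024 requested 2 MB pages, no surplus or
# reserved pages, and an even 512/512 split across the two NUMA nodes.
requested=1024
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
total=$(get_meminfo HugePages_Total)
(( total == requested + surp + resv )) && echo "global hugepage count OK ($total)"

for node_dir in /sys/devices/system/node/node[0-9]*; do
    [[ -d $node_dir ]] || continue
    node=${node_dir##*node}
    echo "node$node: HugePages_Surp=$(get_meminfo HugePages_Surp "$node")"
done

The log continues below with the same per-field scan repeated against /sys/devices/system/node/node0/meminfo and node1/meminfo to read each node's HugePages_Surp.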
00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:48.164 12:25:28 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.164 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 41059140 kB' 'MemUsed: 7005724 kB' 'SwapCached: 0 kB' 'Active: 4242056 kB' 'Inactive: 242532 kB' 'Active(anon): 3367028 kB' 'Inactive(anon): 0 kB' 'Active(file): 875028 kB' 'Inactive(file): 242532 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4001684 kB' 'Mapped: 100192 kB' 'AnonPages: 486104 kB' 'Shmem: 2884124 kB' 'KernelStack: 10680 kB' 'PageTables: 5988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 189100 kB' 'Slab: 547696 kB' 'SReclaimable: 189100 kB' 'SUnreclaim: 358596 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.165 
12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 12:25:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[... xtrace elided: setup/common.sh@31-32 skip the remaining node0 meminfo fields (Dirty ... HugePages_Free) while scanning for HugePages_Surp ...]
00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc --
setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220560 kB' 'MemFree: 32417212 kB' 'MemUsed: 11803348 kB' 'SwapCached: 0 kB' 'Active: 5394892 kB' 'Inactive: 3870872 kB' 'Active(anon): 5112072 kB' 'Inactive(anon): 0 kB' 'Active(file): 282820 kB' 'Inactive(file): 3870872 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9128740 kB' 'Mapped: 85084 kB' 'AnonPages: 137136 kB' 'Shmem: 4975048 kB' 'KernelStack: 6936 kB' 'PageTables: 2812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 320600 kB' 'Slab: 568232 kB' 'SReclaimable: 320600 kB' 'SUnreclaim: 247632 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.166 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.166 12:25:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
[... xtrace elided: setup/common.sh@31-32 skip the remaining node1 meminfo fields (SwapCached ... HugePages_Free) while scanning for HugePages_Surp ...]
00:03:48.167 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.167 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:48.167 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
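The xtrace above is setup/common.sh's get_meminfo walking the node1 meminfo dump field by field until it reaches HugePages_Surp and echoes its value (0 for both nodes in this run). A minimal, self-contained sketch of that kind of lookup, for illustration only; the helper name and the sed-based prefix stripping are assumptions of mine, not the SPDK script itself:

#!/usr/bin/env bash
# Sketch: look up one field from /proc/meminfo or a per-node meminfo file,
# in the spirit of the setup/common.sh xtrace above. Illustration only.
get_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node N "; strip that, then split
    # "Field:   value kB" on ': ' exactly like the read loop in the trace.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    echo 0
}

get_meminfo_sketch HugePages_Surp 1   # would print node1's surplus hugepage count (0 here)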
setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:48.427 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:48.427 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:48.427 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:48.427 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:03:48.427 node0=512 expecting 512 00:03:48.427 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:48.427 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:48.427 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:48.427 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:03:48.427 node1=512 expecting 512 00:03:48.427 12:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]] 00:03:48.427 00:03:48.427 real 0m5.942s 00:03:48.427 user 0m1.992s 00:03:48.427 sys 0m3.754s 00:03:48.427 12:25:28 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.427 12:25:28 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:48.427 ************************************ 00:03:48.427 END TEST even_2G_alloc 00:03:48.427 ************************************ 00:03:48.427 12:25:28 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc 00:03:48.427 12:25:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.427 12:25:28 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.427 12:25:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:48.427 ************************************ 00:03:48.427 START TEST odd_alloc 00:03:48.427 ************************************ 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1129 -- # odd_alloc 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:48.427 12:25:28 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.427 12:25:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:52.618 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:52.618 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:52.618 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:52.618 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:52.618 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:52.618 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:52.618 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:52.618 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:52.618 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:52.618 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:52.618 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:52.618 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:52.618 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:52.618 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:52.618 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:52.618 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:52.618 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:54.532 12:25:34 
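For odd_alloc the trace requests 2098176 kB of hugepages, i.e. 1025 pages at the default 2048 kB size, and splits them 513/512 across the two NUMA nodes (nodes_test[0]=513, nodes_test[1]=512) before HUGEMEM=2049 is handed to scripts/setup.sh. A small sketch of the round-up-and-split arithmetic, assuming 2048 kB pages; it reproduces the result shown in the trace, not the exact hugepages.sh loop:

#!/usr/bin/env bash
# Sketch of the size -> per-node hugepage split visible in the xtrace above.
# Assumes the 2048 kB default hugepage size; not the actual hugepages.sh code.
size_kb=2098176
hp_kb=2048
no_nodes=2

# Round up to whole pages: 2098176 / 2048 = 1024.5 -> 1025 pages.
nr_hugepages=$(( (size_kb + hp_kb - 1) / hp_kb ))

# Spread the pages across the nodes, giving the remainder to the lowest
# node(s), which matches node0=513 / node1=512 in the log.
declare -a nodes_test
base=$(( nr_hugepages / no_nodes ))
rem=$(( nr_hugepages % no_nodes ))
for (( node = 0; node < no_nodes; node++ )); do
    nodes_test[node]=$(( base + (node < rem ? 1 : 0) ))
done

echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]} node1=${nodes_test[1]}"
# -> nr_hugepages=1025 node0=513 node1=512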
setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285424 kB' 'MemFree: 73509288 kB' 'MemAvailable: 78454320 kB' 'Buffers: 10204 kB' 'Cached: 13120372 kB' 'SwapCached: 0 kB' 'Active: 9639124 kB' 'Inactive: 4113404 kB' 'Active(anon): 8481276 kB' 'Inactive(anon): 0 kB' 'Active(file): 1157848 kB' 'Inactive(file): 4113404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625392 kB' 'Mapped: 185372 kB' 'Shmem: 7859324 kB' 'KReclaimable: 509700 kB' 'Slab: 1115600 kB' 'SReclaimable: 509700 kB' 'SUnreclaim: 605900 kB' 'KernelStack: 17680 kB' 'PageTables: 8668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481716 kB' 'Committed_AS: 9827012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215632 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 810424 kB' 'DirectMap2M: 20885504 kB' 'DirectMap1G: 80740352 kB' 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.532 12:25:34 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _
[... xtrace elided: setup/common.sh@31-32 skip /proc/meminfo fields (Buffers ... Percpu) while scanning for AnonHugePages ...]
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.533 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.533 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.533 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.533 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.533 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.533 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.533 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:54.533 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:54.533 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:54.533 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.533 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:54.533 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:54.533 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.533 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.533 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.533 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.533 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.533 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.533 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.533 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.534 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285424 kB' 'MemFree: 73508440 kB' 'MemAvailable: 78453472 kB' 'Buffers: 10204 kB' 'Cached: 13120372 kB' 'SwapCached: 0 kB' 'Active: 9639744 kB' 'Inactive: 4113404 kB' 'Active(anon): 8481896 kB' 'Inactive(anon): 0 kB' 'Active(file): 1157848 kB' 'Inactive(file): 4113404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625948 kB' 'Mapped: 185356 kB' 'Shmem: 7859324 kB' 'KReclaimable: 509700 kB' 'Slab: 1115592 kB' 'SReclaimable: 509700 kB' 'SUnreclaim: 605892 kB' 'KernelStack: 17728 kB' 'PageTables: 8944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481716 kB' 'Committed_AS: 9827028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215680 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 810424 kB' 'DirectMap2M: 20885504 kB' 'DirectMap1G: 80740352 kB' 00:03:54.534 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.534 
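At this point the script has established AnonHugePages=0 (no transparent hugepages interfering) and is re-reading HugePages_Surp before comparing each node's counters against the expected split, the same comparison that produced the "node0=512 expecting 512" lines for the previous test. A rough sketch of such a per-node check using the standard sysfs hugepage counters; the hard-coded expected split and the use of sysfs instead of meminfo are my simplifications, not setup/hugepages.sh itself:

#!/usr/bin/env bash
# Sketch: compare each NUMA node's allocated 2 MB hugepages with the
# expected odd_alloc split. Illustration only.
expected=(513 512)   # node0, node1 -- the split computed above

for node in "${!expected[@]}"; do
    f=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    got=$(cat "$f" 2>/dev/null || echo 0)
    echo "node$node=$got expecting ${expected[node]}"
    if (( got != expected[node] )); then
        echo "node$node hugepage count mismatch" >&2
    fi
done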
[... xtrace elided: setup/common.sh@31-32 skip /proc/meminfo fields (MemFree ... SUnreclaim) while scanning for HugePages_Surp ...]
00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 
12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- 
# [[ -n '' ]] 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285424 kB' 'MemFree: 73507380 kB' 'MemAvailable: 78452412 kB' 'Buffers: 10204 kB' 'Cached: 13120392 kB' 'SwapCached: 0 kB' 'Active: 9639068 kB' 'Inactive: 4113404 kB' 'Active(anon): 8481220 kB' 'Inactive(anon): 0 kB' 'Active(file): 1157848 kB' 'Inactive(file): 4113404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625244 kB' 'Mapped: 185356 kB' 'Shmem: 7859344 kB' 'KReclaimable: 509700 kB' 'Slab: 1115584 kB' 'SReclaimable: 509700 kB' 'SUnreclaim: 605884 kB' 'KernelStack: 17728 kB' 'PageTables: 8964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481716 kB' 'Committed_AS: 9827048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215728 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 810424 kB' 'DirectMap2M: 20885504 kB' 'DirectMap1G: 80740352 kB' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
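The surrounding trace is setup/common.sh's get_meminfo helper scanning /proc/meminfo one key at a time: it sets IFS=': ', reads each line into var/val, skips every key that does not match the requested one (HugePages_Surp, then HugePages_Rsvd, then HugePages_Total), and echoes the matching value before returning. A minimal sketch of that lookup, inferred from the xtrace output rather than copied from the real setup/common.sh (the name get_meminfo_sketch and the single-digit node handling are assumptions for illustration):

#!/usr/bin/env bash
# Sketch of the lookup the xtrace above exercises; behaviour inferred from the log, not SPDK source.
get_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
    # Per-node calls (node=0, node=1, ...) read the sysfs copy instead, as the trace shows later.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#Node [0-9] }               # sysfs lines carry a "Node N " prefix; /proc lines do not
        IFS=': ' read -r var val _ <<< "$line" # split "Key:   value kB" into key and value
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"                   # e.g. HugePages_Surp -> 0 on this box
            return 0
        fi
    done < "$mem_f"
    echo 0
}
# Example: get_meminfo_sketch HugePages_Rsvd prints 0 here, matching the resv=0 computed below.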
00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 12:25:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025 00:03:54.537 nr_hugepages=1025 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:54.537 resv_hugepages=0 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:54.537 surplus_hugepages=0 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:54.537 anon_hugepages=0 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages )) 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285424 kB' 'MemFree: 73505848 kB' 'MemAvailable: 78450880 kB' 'Buffers: 10204 kB' 'Cached: 13120412 kB' 'SwapCached: 0 kB' 'Active: 9639008 kB' 'Inactive: 4113404 kB' 'Active(anon): 8481160 kB' 'Inactive(anon): 0 kB' 'Active(file): 1157848 kB' 'Inactive(file): 4113404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625128 kB' 'Mapped: 185356 kB' 'Shmem: 7859364 kB' 'KReclaimable: 509700 kB' 'Slab: 1115584 kB' 'SReclaimable: 509700 kB' 'SUnreclaim: 605884 kB' 'KernelStack: 17728 kB' 'PageTables: 8828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481716 kB' 
'Committed_AS: 9826824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215760 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 810424 kB' 'DirectMap2M: 20885504 kB' 'DirectMap1G: 80740352 kB' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 12:25:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 41078352 kB' 'MemUsed: 6986512 kB' 'SwapCached: 0 kB' 'Active: 4243688 kB' 'Inactive: 242532 kB' 'Active(anon): 3368660 kB' 'Inactive(anon): 0 kB' 'Active(file): 875028 kB' 'Inactive(file): 242532 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4001824 kB' 'Mapped: 100256 kB' 'AnonPages: 487648 kB' 'Shmem: 2884264 kB' 'KernelStack: 10616 kB' 'PageTables: 5852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 189100 kB' 'Slab: 546668 kB' 'SReclaimable: 189100 kB' 'SUnreclaim: 357568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
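At this point hugepages.sh has confirmed the global count (1025 == nr_hugepages + surp + resv) and found two NUMA nodes, splitting the odd total as 513 pages on node0 and 512 on node1; the per-node get_meminfo HugePages_Surp call traced here re-reads /sys/devices/system/node/node0/meminfo to check node0's share. A small standalone sketch of that bookkeeping, using the values printed in this log (variable names loosely mirror the trace and are otherwise illustrative):

# Values taken from the log above; the arithmetic mirrors the odd_alloc consistency check.
nr_hugepages=1025 surp=0 resv=0
nodes_test=(513 512)                              # odd total split across the 2 NUMA nodes
(( 1025 == nr_hugepages + surp + resv )) && echo "global hugepage count consistent"
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                # resv is 0 here, so the split is unchanged
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]} total=$(( nodes_test[0] + nodes_test[1] ))"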
00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 
00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220560 kB' 'MemFree: 32425016 kB' 'MemUsed: 11795544 kB' 'SwapCached: 0 kB' 'Active: 5395296 kB' 'Inactive: 3870872 kB' 'Active(anon): 5112476 kB' 'Inactive(anon): 0 kB' 'Active(file): 282820 kB' 'Inactive(file): 3870872 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9128812 kB' 'Mapped: 85100 kB' 'AnonPages: 137376 kB' 'Shmem: 4975120 kB' 'KernelStack: 6984 kB' 'PageTables: 2884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 320600 kB' 'Slab: 568916 kB' 'SReclaimable: 320600 kB' 'SUnreclaim: 248316 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 12:25:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513' 00:03:54.542 node0=513 expecting 513 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:03:54.542 node1=512 expecting 512 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:54.542 00:03:54.542 real 0m6.273s 00:03:54.542 user 0m2.228s 00:03:54.542 sys 0m4.066s 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:54.542 12:25:34 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:54.542 ************************************ 00:03:54.542 END TEST odd_alloc 00:03:54.542 ************************************ 00:03:54.802 12:25:34 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test 
custom_alloc custom_alloc 00:03:54.802 12:25:34 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:54.802 12:25:34 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.802 12:25:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:54.802 ************************************ 00:03:54.802 START TEST custom_alloc 00:03:54.802 ************************************ 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1129 -- # custom_alloc 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=() 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@166 -- # (( 2 > 1 )) 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 )) 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@73 -- # (( 2 > 0 )) 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:54.802 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:03:54.803 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:54.803 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024 00:03:54.803 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:03:54.803 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:54.803 12:25:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:03:54.803 12:25:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.803 12:25:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:58.091 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:58.091 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:58.091 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:58.091 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:58.091 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:58.091 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:58.091 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:58.091 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:58.091 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:58.091 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:58.091 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:58.091 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:58.091 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:58.091 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:58.091 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:58.091 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:58.091 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:00.001 
12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285424 kB' 'MemFree: 72456404 kB' 'MemAvailable: 77401436 kB' 'Buffers: 10204 kB' 'Cached: 13120572 kB' 'SwapCached: 0 kB' 'Active: 9640244 kB' 'Inactive: 4113404 kB' 'Active(anon): 8482396 kB' 'Inactive(anon): 0 kB' 'Active(file): 1157848 kB' 'Inactive(file): 4113404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 626520 kB' 'Mapped: 185468 kB' 'Shmem: 7859524 kB' 'KReclaimable: 509700 kB' 'Slab: 1115044 kB' 'SReclaimable: 509700 kB' 'SUnreclaim: 605344 kB' 'KernelStack: 17824 kB' 'PageTables: 9144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958452 kB' 'Committed_AS: 9826352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215648 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 810424 kB' 'DirectMap2M: 20885504 kB' 'DirectMap1G: 80740352 kB' 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:25:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.002 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.002 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.002 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.002 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.002 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.002 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.002 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.002 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.002 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.002 12:25:40 
00:04:00.002 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-@32 -- # read -r var val _ over the remaining keys (Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted), continuing until [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] matches
00:04:00.002 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.002 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:00.002 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0
00:04:00.002 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:04:00.002 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:00.002 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:00.002 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:00.002 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.002 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.002 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.002 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.002 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.002 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.002 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.002 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.002 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285424 kB' 'MemFree: 72455812 kB' 'MemAvailable: 77400812 kB' 'Buffers: 10204 kB' 'Cached: 13120572 kB' 'SwapCached: 0 kB' 'Active: 9639844 kB' 'Inactive: 4113404 kB' 'Active(anon): 8481996 kB' 'Inactive(anon): 0 kB' 'Active(file): 1157848 kB' 'Inactive(file): 4113404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 626152 kB' 'Mapped: 185420 kB' 'Shmem: 7859524 kB' 'KReclaimable: 509668 kB' 'Slab: 1115056 kB' 'SReclaimable: 509668 kB' 'SUnreclaim: 605388 kB' 'KernelStack: 17648 kB' 'PageTables: 8900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958452 kB' 'Committed_AS: 9827872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215648 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 810424 kB' 'DirectMap2M: 20885504 kB' 'DirectMap1G: 80740352 kB'
00:04:00.003 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-@32 -- # scan of the snapshot above, one key per iteration (MemTotal, MemFree, ..., HugePages_Rsvd), each failing the test and continuing until [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] matches
00:04:00.004 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.004 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:00.004 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0
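The get_meminfo helper traced above is the workhorse of these checks: it loads /proc/meminfo (or a per-node /sys/devices/system/node/node<N>/meminfo file when a node argument is given), strips any "Node N " prefix, then walks the keys with IFS=': ' read -r var val _ until the requested key matches and echoes its value. A minimal standalone sketch of that parsing approach, assuming bash and using my own function name get_meminfo_value rather than the exact setup/common.sh implementation:

get_meminfo_value() {
  # Sketch only: simplified from the setup/common.sh trace above, not the SPDK helper itself.
  local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
  # With no node argument the tested path is ".../node/node/meminfo", which does not
  # exist, so the lookup falls back to the global /proc/meminfo (as in this run).
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  # Per-node files prefix each line with "Node N "; drop it so the key lands in $var.
  sed 's/^Node [0-9]* //' "$mem_f" | while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] && { echo "$val"; break; }
  done
}

# Example: on the host above this prints 0, matching the 'echo 0' trace lines.
get_meminfo_value HugePages_Surp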
00:04:00.004 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:04:00.004 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:00.004 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:00.004 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:00.004 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.004 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.004 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.004 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.004 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.004 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.004 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.004 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.004 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285424 kB' 'MemFree: 72454852 kB' 'MemAvailable: 77399852 kB' 'Buffers: 10204 kB' 'Cached: 13120592 kB' 'SwapCached: 0 kB' 'Active: 9639704 kB' 'Inactive: 4113404 kB' 'Active(anon): 8481856 kB' 'Inactive(anon): 0 kB' 'Active(file): 1157848 kB' 'Inactive(file): 4113404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 626016 kB' 'Mapped: 185420 kB' 'Shmem: 7859544 kB' 'KReclaimable: 509668 kB' 'Slab: 1115048 kB' 'SReclaimable: 509668 kB' 'SUnreclaim: 605380 kB' 'KernelStack: 17760 kB' 'PageTables: 8704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958452 kB' 'Committed_AS: 9827896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215680 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 810424 kB' 'DirectMap2M: 20885504 kB' 'DirectMap1G: 80740352 kB'
00:04:00.005 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-@32 -- # scan of the snapshot above, one key per iteration (MemTotal, MemFree, ..., HugePages_Free), each failing the test and continuing until [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] matches
00:04:00.006 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.006 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:00.006 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0
00:04:00.006 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536
00:04:00.006 nr_hugepages=1536
00:04:00.006 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:04:00.006 resv_hugepages=0
00:04:00.006 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:04:00.006 surplus_hugepages=0
00:04:00.006 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:04:00.006 anon_hugepages=0
00:04:00.006 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:00.006 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages ))
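Those values feed the arithmetic checks at setup/hugepages.sh@106 and @108: the requested page count (1536 in this run) must equal the kernel's pool plus any surplus or reserved pages, and with surp and resv both 0 that collapses to a straight equality. A hedged sketch of that accounting step, reusing the get_meminfo_value helper sketched earlier; function and variable names here are mine, not necessarily the script's:

verify_hugepage_accounting() {
  # Sketch of the @96-@108 sequence traced above; 1536 is the count seen in this run.
  local expected=$1 anon surp resv total
  anon=$(get_meminfo_value AnonHugePages)    # 0 in this log
  surp=$(get_meminfo_value HugePages_Surp)   # 0 in this log
  resv=$(get_meminfo_value HugePages_Rsvd)   # 0 in this log
  total=$(get_meminfo_value HugePages_Total) # 1536 in this log
  echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
  # Both conditions from the trace: everything requested is accounted for, and none
  # of it is sitting in the surplus or reserved buckets.
  (( expected == total + surp + resv )) && (( expected == total ))
}

verify_hugepage_accounting 1536   # succeeds against the snapshots above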
00:04:00.006 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:04:00.006 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:00.006 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:00.006 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:00.006 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.006 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.006 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.006 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.006 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.006 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.006 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285424 kB' 'MemFree: 72457324 kB' 'MemAvailable: 77402324 kB' 'Buffers: 10204 kB' 'Cached: 13120612 kB' 'SwapCached: 0 kB' 'Active: 9640020 kB' 'Inactive: 4113404 kB' 'Active(anon): 8482172 kB' 'Inactive(anon): 0 kB' 'Active(file): 1157848 kB' 'Inactive(file): 4113404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 626244 kB' 'Mapped: 185420 kB' 'Shmem: 7859564 kB' 'KReclaimable: 509668 kB' 'Slab: 1115048 kB' 'SReclaimable: 509668 kB' 'SUnreclaim: 605380 kB' 'KernelStack: 17584 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958452 kB' 'Committed_AS: 9828876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215664 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 810424 kB' 'DirectMap2M: 20885504 kB' 'DirectMap1G: 80740352 kB'
00:04:00.006 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.006 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.268 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-@32 -- # scan of the snapshot above, one key per iteration (MemTotal, MemFree, ..., PageTables), each so far failing [[ $var == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] and continuing
-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.268 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.268 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.268 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.268 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.268 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.268 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.268 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.268 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.268 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.268 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.268 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.268 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.268 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.268 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.268 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.268 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.268 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.268 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.268 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.269 
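
The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue" entries above and below is setup/common.sh's get_meminfo() walking every meminfo key until it reaches the one it was asked for. A condensed sketch of that logic, reconstructed from the common.sh@17-@33 trace entries (variable names follow the trace; this is an illustration of the pattern, not the verbatim SPDK script):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node <id> " prefix strip below

    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # Per-node queries read the node-local meminfo file instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node <id> "; drop that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan "Key: value [kB]" pairs; keys that do not match are skipped,
        # the matching key's value is echoed and the function returns.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total      # system-wide, e.g. 1536 in this run
    get_meminfo HugePages_Surp 0     # node 0 only
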
12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 41105880 kB' 'MemUsed: 6958984 kB' 'SwapCached: 0 kB' 'Active: 4242848 kB' 'Inactive: 242532 kB' 'Active(anon): 
3367820 kB' 'Inactive(anon): 0 kB' 'Active(file): 875028 kB' 'Inactive(file): 242532 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4001956 kB' 'Mapped: 100316 kB' 'AnonPages: 486200 kB' 'Shmem: 2884396 kB' 'KernelStack: 10616 kB' 'PageTables: 5776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 189100 kB' 'Slab: 546480 kB' 'SReclaimable: 189100 kB' 'SUnreclaim: 357380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.269 12:25:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.269 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:00.270 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220560 kB' 'MemFree: 31347348 kB' 'MemUsed: 12873212 kB' 'SwapCached: 0 kB' 'Active: 5397340 kB' 
'Inactive: 3870872 kB' 'Active(anon): 5114520 kB' 'Inactive(anon): 0 kB' 'Active(file): 282820 kB' 'Inactive(file): 3870872 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9128884 kB' 'Mapped: 85300 kB' 'AnonPages: 139516 kB' 'Shmem: 4975192 kB' 'KernelStack: 7112 kB' 'PageTables: 3116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 320568 kB' 'Slab: 568512 kB' 'SReclaimable: 320568 kB' 'SUnreclaim: 247944 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.271 
12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.271 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:04:00.272 node0=512 expecting 512 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024' 00:04:00.272 node1=1024 expecting 1024 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:00.272 00:04:00.272 real 0m5.531s 00:04:00.272 user 0m1.664s 00:04:00.272 sys 0m3.730s 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.272 12:25:40 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:00.272 ************************************ 00:04:00.272 END TEST custom_alloc 00:04:00.272 ************************************ 00:04:00.272 12:25:40 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:00.272 12:25:40 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.272 12:25:40 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.272 12:25:40 
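
The custom_alloc test that ends just above reserved 1536 huge pages split unevenly across the two NUMA nodes, then confirmed the kernel really placed 512 on node0 and 1024 on node1 (the "node0=512 expecting 512" and "node1=1024 expecting 1024" lines). A minimal sketch of that per-node verification, reusing the get_meminfo() sketch from the earlier note; the expected split is hard-coded only because that is what this particular run requested, and the real logic lives in test/setup/hugepages.sh (@111-@129 in the trace):

    # Expected per-node split for this run (custom_alloc asked for 512 + 1024).
    declare -a nodes_test=( [0]=512 [1]=1024 ) nodes_sys=()

    shopt -s extglob
    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}
        # What the kernel actually reserved on this node.
        nodes_sys[id]=$(get_meminfo HugePages_Total "$id")
        # Fold surplus pages into the expected count, as hugepages.sh@115-@116 does
        # (both nodes report HugePages_Surp: 0 in this run).
        (( nodes_test[id] += $(get_meminfo HugePages_Surp "$id") ))
    done

    for id in "${!nodes_test[@]}"; do
        echo "node$id=${nodes_sys[id]} expecting ${nodes_test[id]}"
    done
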
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:00.272 ************************************ 00:04:00.272 START TEST no_shrink_alloc 00:04:00.272 ************************************ 00:04:00.272 12:25:40 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1129 -- # no_shrink_alloc 00:04:00.272 12:25:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0 00:04:00.272 12:25:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:04:00.272 12:25:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:04:00.272 12:25:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift 00:04:00.272 12:25:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0') 00:04:00.272 12:25:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids 00:04:00.272 12:25:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:00.272 12:25:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:00.272 12:25:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:04:00.272 12:25:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:04:00.272 12:25:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:00.272 12:25:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:00.272 12:25:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:00.272 12:25:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:00.272 12:25:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:00.272 12:25:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:04:00.272 12:25:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:04:00.272 12:25:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:04:00.272 12:25:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0 00:04:00.272 12:25:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024 00:04:00.272 12:25:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0 00:04:00.272 12:25:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output 00:04:00.272 12:25:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.272 12:25:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:03.564 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:03.564 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:03.564 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:03.564 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:03.564 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:03.564 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:03.564 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:03.564 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:03.564 0000:00:04.0 (8086 2021): Already using the vfio-pci 
driver 00:04:03.564 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:03.564 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:03.823 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:03.823 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:03.823 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:03.823 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:03.823 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:03.823 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285424 kB' 'MemFree: 73501944 kB' 'MemAvailable: 78446944 kB' 'Buffers: 10204 kB' 'Cached: 13120772 kB' 'SwapCached: 0 kB' 'Active: 9638364 kB' 'Inactive: 4113404 kB' 'Active(anon): 8480516 kB' 'Inactive(anon): 0 kB' 'Active(file): 1157848 kB' 'Inactive(file): 4113404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624100 kB' 'Mapped: 185488 kB' 'Shmem: 7859724 kB' 'KReclaimable: 509668 kB' 'Slab: 1115400 kB' 'SReclaimable: 509668 kB' 'SUnreclaim: 605732 kB' 'KernelStack: 17616 kB' 'PageTables: 8808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482740 kB' 'Committed_AS: 9825932 
kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215648 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 810424 kB' 'DirectMap2M: 20885504 kB' 'DirectMap1G: 80740352 kB' 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.363 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
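
The no_shrink_alloc test being set up in this stretch of the trace asked get_test_nr_hugepages for 2097152 confined to node 0, which (assuming the size argument is in kB, consistent with the 2048 kB Hugepagesize reported above) works out to the NRHUGE=1024 / HUGENODE=0 values shown before scripts/setup.sh is re-run. verify_nr_hugepages then checks the transparent-hugepage setting (the "always [madvise] never" string at hugepages.sh@95) and records the AnonHugePages baseline via the get_meminfo scan that follows. A rough sketch of both steps under those assumptions, with values taken from this run:

    # 1) Convert the requested size into a huge page count.
    size_kb=2097152
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this rig
    echo "NRHUGE=$(( size_kb / hugepagesize_kb )) HUGENODE=0"            # -> NRHUGE=1024

    # 2) If transparent hugepages are not disabled, note the anonymous
    #    hugepage usage so it can be accounted for separately later.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never" above
    if [[ $thp != *"[never]"* ]]; then
        echo "AnonHugePages baseline: $(get_meminfo AnonHugePages) kB"
    fi
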
00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.364 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
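Pieced together from the trace itself (the mapfile of the dump, IFS=': ', read -r var val _, the pattern test against the requested key, and the echo/return that closes the scan just below), get_meminfo behaves roughly like the sketch that follows. This is a readability reconstruction, not the verbatim setup/common.sh source, and it drops the optional per-node meminfo path the script also supports:

    # Rough reconstruction of the lookup loop traced here; not the real script.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"   # value only; the "kB" unit lands in the discarded field
            return 0
        done < /proc/meminfo
    }
    # e.g. get_meminfo_sketch HugePages_Free  -> 1024 on this node, per the dump above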
00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # 
get_meminfo HugePages_Surp 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285424 kB' 'MemFree: 73502612 kB' 'MemAvailable: 78447612 kB' 'Buffers: 10204 kB' 'Cached: 13120772 kB' 'SwapCached: 0 kB' 'Active: 9637788 kB' 'Inactive: 4113404 kB' 'Active(anon): 8479940 kB' 'Inactive(anon): 0 kB' 'Active(file): 1157848 kB' 'Inactive(file): 4113404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623500 kB' 'Mapped: 185488 kB' 'Shmem: 7859724 kB' 'KReclaimable: 509668 kB' 'Slab: 1115496 kB' 'SReclaimable: 509668 kB' 'SUnreclaim: 605828 kB' 'KernelStack: 17584 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482740 kB' 'Committed_AS: 9825948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215632 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 810424 kB' 'DirectMap2M: 20885504 kB' 'DirectMap1G: 80740352 kB' 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 12:25:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 12:25:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 12:25:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.367 12:25:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:06.367 
12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285424 kB' 'MemFree: 73502372 kB' 'MemAvailable: 78447372 kB' 'Buffers: 10204 kB' 'Cached: 13120796 kB' 'SwapCached: 0 kB' 'Active: 9637584 kB' 'Inactive: 4113404 kB' 'Active(anon): 8479736 kB' 'Inactive(anon): 0 kB' 'Active(file): 1157848 kB' 'Inactive(file): 4113404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623272 kB' 'Mapped: 185488 kB' 'Shmem: 7859748 kB' 'KReclaimable: 509668 kB' 'Slab: 1115516 kB' 'SReclaimable: 509668 kB' 'SUnreclaim: 605848 kB' 'KernelStack: 17568 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482740 kB' 'Committed_AS: 9825972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215632 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 810424 kB' 'DirectMap2M: 20885504 kB' 'DirectMap1G: 80740352 kB' 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
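The printf block above is another full meminfo dump, this one feeding the HugePages_Rsvd lookup; a little further down the trace, hugepages.sh echoes nr_hugepages=1024 with resv_hugepages, surplus_hugepages and anon_hugepages all 0 and evaluates (( 1024 == nr_hugepages + surp + resv )). A sketch of that consistency arithmetic, with variable names taken from the trace (not the script verbatim):

    # Sketch of the consistency check evaluated later in this trace: the node's
    # hugepage total (1024 here) must equal requested pages plus surplus and reserved.
    nr_hugepages=1024; surp=0; resv=0; anon=0
    (( 1024 == nr_hugepages + surp + resv )) && echo "nr_hugepages verified: $nr_hugepages"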
00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 12:25:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 12:25:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.369 12:25:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:06.369 nr_hugepages=1024 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:06.369 resv_hugepages=0 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:06.369 surplus_hugepages=0 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:06.369 anon_hugepages=0 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.369 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285424 kB' 'MemFree: 73499860 kB' 'MemAvailable: 78444860 kB' 'Buffers: 10204 kB' 'Cached: 13120816 kB' 'SwapCached: 0 kB' 'Active: 9638012 kB' 'Inactive: 4113404 kB' 'Active(anon): 8480164 kB' 'Inactive(anon): 0 kB' 'Active(file): 1157848 kB' 'Inactive(file): 4113404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623724 kB' 'Mapped: 185488 kB' 'Shmem: 7859768 kB' 'KReclaimable: 509668 kB' 'Slab: 1115516 kB' 'SReclaimable: 509668 kB' 'SUnreclaim: 605848 kB' 'KernelStack: 17584 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482740 kB' 'Committed_AS: 9825992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215632 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 810424 kB' 'DirectMap2M: 20885504 kB' 'DirectMap1G: 80740352 kB' 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 12:25:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.370 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
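[editor's note] The xtrace above is the setup/common.sh get_meminfo helper at work: it reads the chosen meminfo file one "key: value" pair at a time, skips (continue) every field that does not match the requested key (first HugePages_Rsvd, here HugePages_Total), then echoes the matching value and returns. The following is a minimal stand-alone sketch of that pattern, not the actual setup/common.sh code; the function name get_meminfo_sketch is hypothetical, and the real helper uses mapfile plus extglob stripping instead of sed/awk.

    #!/usr/bin/env bash
    # Sketch only: print one field from /proc/meminfo or a per-node meminfo file.
    get_meminfo_sketch() {
        local get=$1 node=$2 mem_f=/proc/meminfo
        # Per-node files live under /sys/devices/system/node/nodeN/meminfo
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node${node}/meminfo
        # Per-node lines carry a "Node N " prefix; drop it, then match the key
        # and print its numeric value (stripping a trailing " kB" if present).
        sed -E 's/^Node [0-9]+ //' "$mem_f" |
            awk -F': *' -v key="$get" '$1 == key { sub(/ kB$/, "", $2); print $2; exit }'
    }
    # Usage (mirrors what the trace is doing):
    #   get_meminfo_sketch HugePages_Total      -> 1024   (system-wide)
    #   get_meminfo_sketch HugePages_Surp 0     -> 0      (node 0 only)

[end editor's note]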
00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 40060952 kB' 'MemUsed: 8003912 kB' 'SwapCached: 0 kB' 'Active: 4241352 kB' 'Inactive: 242532 kB' 'Active(anon): 3366324 kB' 'Inactive(anon): 0 kB' 'Active(file): 875028 kB' 'Inactive(file): 242532 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4002100 kB' 'Mapped: 100376 kB' 'AnonPages: 484992 kB' 'Shmem: 2884540 kB' 'KernelStack: 10616 kB' 'PageTables: 5728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 189100 kB' 'Slab: 546628 kB' 'SReclaimable: 189100 kB' 'SUnreclaim: 357528 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.372 12:25:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.372 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.373 
12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.373 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:06.374 node0=1024 expecting 1024 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.374 12:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:09.682 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:09.682 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:09.682 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:09.682 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:09.682 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:09.682 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:09.682 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:09.682 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:09.682 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:09.682 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:09.682 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:09.682 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:09.682 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:09.682 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:09.682 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:09.682 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:09.682 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:12.219 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:12.219 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages 00:04:12.219 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:04:12.219 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:12.219 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:12.219 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:12.219 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:12.219 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 
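[editor's note] At this point the test has confirmed node0=1024 expecting 1024 and re-runs scripts/setup.sh with CLEAR_HUGE=no NRHUGE=512 HUGENODE=0; the script answers "Requested 512 hugepages but 1024 already allocated on node0", i.e. it does not shrink an existing allocation. The snippet below is only a hedged sketch of the guard implied by that INFO line, not the real scripts/setup.sh logic; the sysfs path is the standard 2048 kB hugepage knob and NRHUGE defaults to 512 as in this run.

    #!/usr/bin/env bash
    # Sketch only: raise nr_hugepages on a node if needed, but never lower it.
    node=${HUGENODE:-0}
    req=${NRHUGE:-512}
    node_hp=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
    cur=$(<"$node_hp")
    if (( req > cur )); then
        # Only grow the pool; writing a smaller value would release free pages.
        echo "$req" | sudo tee "$node_hp" >/dev/null
    else
        echo "INFO: Requested $req hugepages but $cur already allocated on node$node"
    fi

The verify_nr_hugepages pass that follows then re-reads AnonHugePages and the HugePages_* counters to confirm the node still reports the original 1024 pages.
[end editor's note]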
00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285424 kB' 'MemFree: 73512580 kB' 'MemAvailable: 78457580 kB' 'Buffers: 10204 kB' 'Cached: 13120952 kB' 'SwapCached: 0 kB' 'Active: 9639016 kB' 'Inactive: 4113404 kB' 'Active(anon): 8481168 kB' 'Inactive(anon): 0 kB' 'Active(file): 1157848 kB' 'Inactive(file): 4113404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624576 kB' 'Mapped: 186648 kB' 'Shmem: 7859904 kB' 'KReclaimable: 509668 kB' 'Slab: 1115292 kB' 'SReclaimable: 509668 kB' 'SUnreclaim: 605624 kB' 'KernelStack: 17616 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482740 kB' 'Committed_AS: 9860768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215664 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 810424 kB' 'DirectMap2M: 20885504 kB' 'DirectMap1G: 80740352 kB' 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue
00:04:12.220 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [xtrace: each remaining /proc/meminfo field is tested against AnonHugePages and skipped with 'continue' until the key matches]
00:04:12.221 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:12.221 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:12.221 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:12.221 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0
00:04:12.221 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:04:12.221 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:12.221 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:12.221 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:12.221 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:12.221 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.221 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.221 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.221 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.221 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.221 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.221 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:12.221 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285424 kB' 'MemFree: 73513332 kB' 'MemAvailable: 78458332 kB' 'Buffers: 10204 kB' 'Cached: 13120956 kB' 'SwapCached: 0 kB' 'Active: 9639504 kB' 'Inactive: 4113404 kB' 'Active(anon): 8481656 kB' 'Inactive(anon): 0 kB' 'Active(file): 1157848 kB' 'Inactive(file): 4113404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625096 kB' 'Mapped: 186596 kB' 'Shmem: 7859908 kB' 'KReclaimable: 509668 kB' 'Slab: 1115344 kB' 'SReclaimable: 509668 kB' 'SUnreclaim: 605676 kB' 'KernelStack: 17616 kB' 'PageTables: 8544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482740 kB' 'Committed_AS: 9860792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215632 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 810424 kB' 'DirectMap2M: 20885504 kB' 'DirectMap1G: 80740352 kB'
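The runs of "[[ <field> == <key> ]] / continue" pairs in this trace are bash xtrace from the get_meminfo helper in setup/common.sh: it reads the whole meminfo file into an array and walks it one line at a time until it reaches the requested key, which is why every field shows up once per lookup. A minimal sketch of that lookup, reconstructed from the xtrace rather than copied from the SPDK source (the function name and exact layout here are illustrative):

#!/usr/bin/env bash
# Sketch of the lookup the xtrace above is performing (reconstructed from the
# trace, not the SPDK source). Prints the value of one meminfo field, either
# globally or for a single NUMA node.
shopt -s extglob   # needed for the "Node <n> " prefix strip below

get_meminfo_sketch() {
    local get=$1 node=$2
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # Per-node files live under /sys and prefix every line with "Node <n> ".
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    # One "[[ field == key ]] || continue" test per line, as the log shows.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"   # numeric part only; the trailing "kB" lands in "_"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Example: get_meminfo_sketch HugePages_Surp   -> prints 0 on this machine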
00:04:12.222 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [xtrace: every /proc/meminfo field from MemTotal onwards is tested against HugePages_Surp and skipped with 'continue' until the key matches]
00:04:12.223 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:12.223 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:12.223 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:12.223 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0
00:04:12.223 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:04:12.223 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:12.223 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:12.223 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:12.223 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:12.223 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.223 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.223 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.223 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.223 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.223 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.223 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:12.223 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285424 kB' 'MemFree: 73511576 kB' 'MemAvailable: 78456576 kB' 'Buffers: 10204 kB' 'Cached: 13120980 kB' 'SwapCached: 0 kB' 'Active: 9639828 kB' 'Inactive: 4113404 kB' 'Active(anon): 8481980 kB' 'Inactive(anon): 0 kB' 'Active(file): 1157848 kB' 'Inactive(file): 4113404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625428 kB' 'Mapped: 186596 kB' 'Shmem: 7859932 kB' 'KReclaimable: 509668 kB' 'Slab: 1115344 kB' 'SReclaimable: 509668 kB' 'SUnreclaim: 605676 kB' 'KernelStack: 17664 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482740 kB' 'Committed_AS: 9861180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215648 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 810424 kB' 'DirectMap2M: 20885504 kB' 'DirectMap1G: 80740352 kB'
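For reference, the handful of hugepage counters being read here one key at a time can also be listed in a single pass with standard tools; a hypothetical ad-hoc check along the same lines (not part of the test run, and node0 is only an example path):

# Dump all hugepage-related fields from the global meminfo in one go
grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb):' /proc/meminfo
# Per-NUMA-node view of the same counters, if node0 exists on the machine
grep HugePages /sys/devices/system/node/node0/meminfo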
00:04:12.224 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [xtrace: every /proc/meminfo field from MemTotal onwards is tested against HugePages_Rsvd and skipped with 'continue' until the key matches]
00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:04:12.225 nr_hugepages=1024
00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:04:12.225 resv_hugepages=0
00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:04:12.225 surplus_hugepages=0
00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:04:12.225 anon_hugepages=0
00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285424 kB' 'MemFree: 73512048 kB' 'MemAvailable: 78457048 kB' 'Buffers: 10204 kB' 'Cached: 13121004 kB' 'SwapCached: 0 kB' 'Active: 9640168 kB' 'Inactive: 4113404 kB' 'Active(anon): 8482320 kB' 'Inactive(anon): 0 kB' 'Active(file): 1157848 kB' 'Inactive(file): 4113404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625744 kB' 'Mapped: 186596 kB' 'Shmem: 7859956 kB' 'KReclaimable: 509668 kB' 'Slab: 1115344 kB' 'SReclaimable: 509668 kB' 'SUnreclaim: 605676 kB' 'KernelStack: 17680 kB' 'PageTables: 8788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482740 kB' 'Committed_AS: 9863844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215648 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 810424 kB' 'DirectMap2M: 20885504 kB' 'DirectMap1G: 80740352 kB'
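The two arithmetic assertions above only pass when surplus and reserved pages are both zero, since the same expected count (1024) has to equal nr_hugepages both with and without them. A hedged sketch of that bookkeeping done directly against /proc/meminfo (variable names are illustrative, not the script's own, and the expected count is hard-coded to match this run):

#!/usr/bin/env bash
# Illustrative re-check of the hugepage accounting asserted by the trace.
want=1024                                                      # expected pool size in this run
nr=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)

# Mirrors (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages ))
(( want == nr + surp + resv )) || echo "hugepage pool does not add up"
(( want == nr ))               || echo "unexpected surplus/reserved pages"

# Sanity check against the snapshot above: HugePages_Total (1024) x Hugepagesize
# (2048 kB) = 2097152 kB, which matches the reported 'Hugetlb: 2097152 kB'.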
kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 810424 kB' 'DirectMap2M: 20885504 kB' 'DirectMap1G: 80740352 kB' 00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.225 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.226 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.227 12:25:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:12.227 
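The get_meminfo calls traced above all follow the same pattern: open /proc/meminfo (or the per-node copy under /sys/devices/system/node/node<N>/meminfo), split each line on ': ', and echo the value of the requested key. A minimal standalone sketch of that lookup, simplified from what the trace shows rather than the exact setup/common.sh helper:

  #!/usr/bin/env bash
  # Sketch of the meminfo lookup traced above (simplified, not the real
  # setup/common.sh get_meminfo): print the value of one key, optionally
  # restricted to a single NUMA node.
  shopt -s extglob
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line var val _
      while IFS= read -r line; do
          line=${line#Node +([0-9]) }        # per-node files prefix lines with "Node <N> "
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "${val:-0}"
              return 0
          fi
      done < "$mem_f"
      return 1
  }

  get_meminfo_sketch HugePages_Total       # prints 1024 on the machine above
  get_meminfo_sketch HugePages_Surp 0      # same key, NUMA node 0 only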
12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 40047556 kB' 'MemUsed: 8017308 kB' 'SwapCached: 0 kB' 'Active: 4242048 kB' 'Inactive: 242532 kB' 'Active(anon): 3367020 kB' 'Inactive(anon): 0 kB' 'Active(file): 875028 kB' 'Inactive(file): 242532 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4002212 kB' 'Mapped: 101396 kB' 'AnonPages: 485540 kB' 'Shmem: 2884652 kB' 'KernelStack: 10648 kB' 'PageTables: 5840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 189100 kB' 'Slab: 546500 kB' 'SReclaimable: 189100 kB' 'SUnreclaim: 357400 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.229 12:25:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:12.229 node0=1024 expecting 1024 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:12.229 00:04:12.229 real 0m11.707s 00:04:12.229 user 0m3.961s 00:04:12.229 sys 0m7.789s 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.229 12:25:52 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:12.229 ************************************ 00:04:12.229 END TEST no_shrink_alloc 00:04:12.229 ************************************ 00:04:12.229 12:25:52 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp 00:04:12.229 12:25:52 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:04:12.229 12:25:52 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:12.229 12:25:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:12.229 12:25:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:12.229 12:25:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:12.229 12:25:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:12.229 12:25:52 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:12.229 12:25:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:12.229 12:25:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:12.229 12:25:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:12.229 12:25:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:12.229 12:25:52 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:04:12.229 12:25:52 
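The clear_hp steps that close the hugepages suite walk every hugepage-size directory of every NUMA node and write 0 back before exporting CLEAR_HUGE=yes. A hedged sketch of that cleanup; the nr_hugepages filename is the standard sysfs knob and is assumed here, because the wrapped trace only shows the bare "echo 0":

  #!/usr/bin/env bash
  # Sketch of the per-node hugepage cleanup traced above: hand every
  # reserved page of every supported size back to the kernel. Needs root;
  # the nr_hugepages target is assumed (the trace elides the redirect).
  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
          [[ -e $hp/nr_hugepages ]] || continue
          echo 0 > "$hp/nr_hugepages"
      done
  done
  export CLEAR_HUGE=yes    # later setup.sh invocations honour this flag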
setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:04:12.229 00:04:12.229 real 0m39.042s 00:04:12.229 user 0m11.921s 00:04:12.229 sys 0m23.627s 00:04:12.229 12:25:52 setup.sh.hugepages -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.229 12:25:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:12.229 ************************************ 00:04:12.229 END TEST hugepages 00:04:12.229 ************************************ 00:04:12.229 12:25:52 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:12.229 12:25:52 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.229 12:25:52 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.229 12:25:52 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:12.229 ************************************ 00:04:12.229 START TEST driver 00:04:12.229 ************************************ 00:04:12.229 12:25:52 setup.sh.driver -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:12.229 * Looking for test storage... 00:04:12.229 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:12.229 12:25:52 setup.sh.driver -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:12.229 12:25:52 setup.sh.driver -- common/autotest_common.sh@1693 -- # lcov --version 00:04:12.229 12:25:52 setup.sh.driver -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:12.229 12:25:52 setup.sh.driver -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@344 -- # case "$op" in 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@345 -- # : 1 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@365 -- # decimal 1 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@353 -- # local d=1 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@355 -- # echo 1 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@366 -- # decimal 2 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@353 -- # local d=2 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@355 -- # echo 2 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.229 12:25:52 setup.sh.driver -- scripts/common.sh@368 -- # return 0 00:04:12.229 12:25:52 setup.sh.driver -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.229 12:25:52 setup.sh.driver -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:12.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.229 --rc genhtml_branch_coverage=1 00:04:12.229 --rc genhtml_function_coverage=1 00:04:12.229 --rc genhtml_legend=1 00:04:12.229 --rc geninfo_all_blocks=1 00:04:12.229 --rc geninfo_unexecuted_blocks=1 00:04:12.229 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:12.229 ' 00:04:12.229 12:25:52 setup.sh.driver -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:12.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.229 --rc genhtml_branch_coverage=1 00:04:12.229 --rc genhtml_function_coverage=1 00:04:12.229 --rc genhtml_legend=1 00:04:12.229 --rc geninfo_all_blocks=1 00:04:12.229 --rc geninfo_unexecuted_blocks=1 00:04:12.230 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:12.230 ' 00:04:12.230 12:25:52 setup.sh.driver -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:12.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.230 --rc genhtml_branch_coverage=1 00:04:12.230 --rc genhtml_function_coverage=1 00:04:12.230 --rc genhtml_legend=1 00:04:12.230 --rc geninfo_all_blocks=1 00:04:12.230 --rc geninfo_unexecuted_blocks=1 00:04:12.230 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:12.230 ' 00:04:12.230 12:25:52 setup.sh.driver -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:12.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.230 --rc genhtml_branch_coverage=1 00:04:12.230 --rc genhtml_function_coverage=1 00:04:12.230 --rc genhtml_legend=1 00:04:12.230 --rc geninfo_all_blocks=1 00:04:12.230 --rc geninfo_unexecuted_blocks=1 00:04:12.230 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:12.230 ' 00:04:12.230 12:25:52 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:12.230 12:25:52 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:12.230 12:25:52 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:20.345 12:25:59 setup.sh.driver -- 
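The driver suite opens by probing the installed lcov through the lt/cmp_versions helpers in scripts/common.sh, and picks the pre-2.0 LCOV option set when the version is below 2 (the devices suite repeats the same probe later). A simplified sketch of that component-wise comparison, assuming purely numeric components; the real helper also coerces non-numeric parts through its decimal function:

  #!/usr/bin/env bash
  # Simplified sketch of the version check traced above: split two version
  # strings on '.', '-' or ':' and compare them component by component.
  version_lt() {                              # returns 0 (true) when $1 < $2
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local i n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
          (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
      done
      return 1                                # equal is not "less than"
  }

  version_lt 1.15 2 && echo "lcov is older than 2.0: use the pre-2.0 LCOV_OPTS"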
setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:20.345 12:25:59 setup.sh.driver -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.345 12:25:59 setup.sh.driver -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.345 12:25:59 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:20.345 ************************************ 00:04:20.345 START TEST guess_driver 00:04:20.345 ************************************ 00:04:20.345 12:25:59 setup.sh.driver.guess_driver -- common/autotest_common.sh@1129 -- # guess_driver 00:04:20.345 12:25:59 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:20.345 12:25:59 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:20.345 12:25:59 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:20.345 12:25:59 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:20.345 12:25:59 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:20.345 12:25:59 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:20.345 12:25:59 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:20.345 12:25:59 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:20.345 12:25:59 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:20.345 12:25:59 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 238 > 0 )) 00:04:20.345 12:25:59 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:20.345 12:25:59 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:20.345 12:25:59 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:20.345 12:25:59 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:20.345 12:25:59 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:20.345 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:20.345 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:20.345 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:20.345 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:20.345 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:20.345 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:20.345 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:20.345 12:25:59 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:20.345 12:25:59 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:20.345 12:25:59 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:20.346 12:25:59 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:20.346 12:25:59 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:20.346 Looking for driver=vfio-pci 00:04:20.346 12:25:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:20.346 12:25:59 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- 
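guess_driver settles on vfio-pci only after seeing a populated /sys/kernel/iommu_groups and a vfio_pci module that modprobe --show-depends resolves to real .ko files. A hedged sketch of that decision using just the checks visible in the trace; the real test/setup/driver.sh also weighs other candidate drivers before reporting "No valid driver found":

  #!/usr/bin/env bash
  # Sketch of the driver pick traced above (simplified reconstruction,
  # not test/setup/driver.sh itself).
  shopt -s nullglob
  iommu_groups=(/sys/kernel/iommu_groups/*)

  if (( ${#iommu_groups[@]} > 0 )) &&
     modprobe --show-depends vfio_pci 2> /dev/null | grep -q '\.ko'; then
      driver=vfio-pci
  else
      driver='No valid driver found'
  fi
  echo "Looking for driver=$driver"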
# setup output config 00:04:20.346 12:25:59 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.346 12:25:59 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:22.882 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.882 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.882 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.882 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.882 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.882 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.882 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.882 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.882 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.882 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.882 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.882 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.882 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.882 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.882 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.882 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.882 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.883 12:26:02 
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.883 12:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.883 12:26:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.883 12:26:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.883 12:26:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.175 12:26:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.175 12:26:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.175 12:26:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.081 12:26:08 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:28.081 12:26:08 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:28.081 12:26:08 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:28.081 12:26:08 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:36.195 00:04:36.195 real 0m15.979s 00:04:36.195 user 0m3.813s 00:04:36.195 sys 0m8.144s 00:04:36.195 12:26:15 setup.sh.driver.guess_driver -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.195 12:26:15 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:36.195 ************************************ 00:04:36.195 END TEST guess_driver 00:04:36.195 ************************************ 00:04:36.195 00:04:36.195 real 0m23.099s 00:04:36.195 user 0m5.860s 00:04:36.195 sys 0m12.389s 00:04:36.195 12:26:15 setup.sh.driver -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.195 12:26:15 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:36.195 ************************************ 00:04:36.195 END TEST driver 00:04:36.195 ************************************ 00:04:36.195 12:26:15 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:36.195 12:26:15 setup.sh -- 
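The long run of marker checks before the guess_driver summary re-reads the output of "setup.sh config" and fails the test if any device line is bound to something other than the guessed driver. A sketch of that verification loop; the field positions are an assumption taken from the "read -r _ _ _ _ marker setup_driver" pattern in the trace, and the relative setup.sh path is illustrative:

  #!/usr/bin/env bash
  # Sketch of the binding verification traced above: every "-> <driver>"
  # line reported by setup.sh config must name the driver we guessed.
  driver=vfio-pci
  fail=0
  while read -r _ _ _ _ marker setup_driver; do
      [[ $marker == '->' ]] || continue        # skip lines without a binding arrow
      [[ $setup_driver == "$driver" ]] || fail=1
  done < <(./scripts/setup.sh config)          # illustrative path to spdk's setup.sh
  (( fail == 0 )) && echo "all devices bound to $driver"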
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.195 12:26:15 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.195 12:26:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:36.195 ************************************ 00:04:36.195 START TEST devices 00:04:36.195 ************************************ 00:04:36.195 12:26:15 setup.sh.devices -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:36.195 * Looking for test storage... 00:04:36.195 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:36.195 12:26:15 setup.sh.devices -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:36.195 12:26:15 setup.sh.devices -- common/autotest_common.sh@1693 -- # lcov --version 00:04:36.195 12:26:15 setup.sh.devices -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:36.195 12:26:15 setup.sh.devices -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@344 -- # case "$op" in 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@345 -- # : 1 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@365 -- # decimal 1 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@353 -- # local d=1 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@355 -- # echo 1 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@366 -- # decimal 2 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@353 -- # local d=2 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@355 -- # echo 2 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.195 12:26:15 setup.sh.devices -- scripts/common.sh@368 -- # return 0 00:04:36.195 12:26:15 setup.sh.devices -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.196 12:26:15 setup.sh.devices -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:36.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.196 --rc genhtml_branch_coverage=1 00:04:36.196 --rc genhtml_function_coverage=1 00:04:36.196 --rc genhtml_legend=1 00:04:36.196 --rc geninfo_all_blocks=1 00:04:36.196 --rc geninfo_unexecuted_blocks=1 00:04:36.196 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:36.196 ' 00:04:36.196 12:26:15 setup.sh.devices -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:36.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.196 --rc genhtml_branch_coverage=1 00:04:36.196 --rc genhtml_function_coverage=1 00:04:36.196 --rc genhtml_legend=1 00:04:36.196 --rc geninfo_all_blocks=1 00:04:36.196 --rc geninfo_unexecuted_blocks=1 00:04:36.196 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:36.196 ' 00:04:36.196 12:26:15 setup.sh.devices -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:36.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.196 --rc genhtml_branch_coverage=1 00:04:36.196 --rc genhtml_function_coverage=1 00:04:36.196 --rc genhtml_legend=1 00:04:36.196 --rc geninfo_all_blocks=1 00:04:36.196 --rc geninfo_unexecuted_blocks=1 00:04:36.196 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:36.196 ' 00:04:36.196 12:26:15 setup.sh.devices -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:36.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.196 --rc genhtml_branch_coverage=1 00:04:36.196 --rc genhtml_function_coverage=1 00:04:36.196 --rc genhtml_legend=1 00:04:36.196 --rc geninfo_all_blocks=1 00:04:36.196 --rc geninfo_unexecuted_blocks=1 00:04:36.196 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:36.196 ' 00:04:36.196 12:26:15 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:36.196 12:26:15 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:36.196 12:26:15 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:36.196 12:26:15 setup.sh.devices -- setup/common.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:42.759 12:26:22 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:42.759 12:26:22 setup.sh.devices -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:42.759 12:26:22 setup.sh.devices -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:42.760 12:26:22 setup.sh.devices -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:42.760 12:26:22 setup.sh.devices -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:42.760 12:26:22 setup.sh.devices -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:42.760 12:26:22 setup.sh.devices -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:42.760 12:26:22 setup.sh.devices -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:42.760 12:26:22 setup.sh.devices -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:42.760 12:26:22 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:42.760 12:26:22 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:42.760 12:26:22 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:42.760 12:26:22 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:42.760 12:26:22 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:42.760 12:26:22 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:42.760 12:26:22 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:42.760 12:26:22 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:42.760 12:26:22 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:1a:00.0 00:04:42.760 12:26:22 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\1\a\:\0\0\.\0* ]] 00:04:42.760 12:26:22 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:42.760 12:26:22 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:04:42.760 12:26:22 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:42.760 No valid GPT data, bailing 00:04:42.760 12:26:22 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:42.760 12:26:22 setup.sh.devices -- scripts/common.sh@394 -- # pt= 00:04:42.760 12:26:22 setup.sh.devices -- scripts/common.sh@395 -- # return 1 00:04:42.760 12:26:22 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:42.760 12:26:22 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:42.760 12:26:22 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:42.760 12:26:22 setup.sh.devices -- setup/common.sh@80 -- # echo 4000787030016 00:04:42.760 12:26:22 setup.sh.devices -- setup/devices.sh@204 -- # (( 4000787030016 >= min_disk_size )) 00:04:42.760 12:26:22 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:42.760 12:26:22 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:1a:00.0 00:04:42.760 12:26:22 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:42.760 12:26:22 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:42.760 12:26:22 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:42.760 12:26:22 setup.sh.devices -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.760 12:26:22 
setup.sh.devices -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.760 12:26:22 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:42.760 ************************************ 00:04:42.760 START TEST nvme_mount 00:04:42.760 ************************************ 00:04:42.760 12:26:22 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1129 -- # nvme_mount 00:04:42.760 12:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:42.760 12:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:42.760 12:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.760 12:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:42.760 12:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:42.760 12:26:22 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:42.760 12:26:22 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:42.760 12:26:22 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:42.760 12:26:22 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:42.760 12:26:22 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:42.760 12:26:22 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:42.760 12:26:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:42.760 12:26:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:42.760 12:26:22 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:42.760 12:26:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:42.760 12:26:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:42.760 12:26:22 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:42.760 12:26:22 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:42.760 12:26:22 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:43.020 Creating new GPT entries in memory. 00:04:43.020 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:43.020 other utilities. 00:04:43.020 12:26:23 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:43.020 12:26:23 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:43.020 12:26:23 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:43.020 12:26:23 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:43.020 12:26:23 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:43.958 Creating new GPT entries in memory. 00:04:43.958 The operation has completed successfully. 
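
The nvme_mount test above has just wiped the test disk and created a single ~1 GiB partition; the log that follows formats it with ext4, mounts it under spdk/test/setup/nvme_mount, and drops a test_nvme file into it. A minimal sketch of the equivalent manual steps, assuming /dev/nvme0n1 is a disposable scratch disk and /tmp/nvme_mount is an illustrative mount point:

    disk=/dev/nvme0n1                                    # assumed scratch disk
    mnt=/tmp/nvme_mount                                  # illustrative mount point

    sgdisk "$disk" --zap-all                             # destroy any existing GPT/MBR structures
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199    # one ~1 GiB partition (sectors 2048..2099199)
    mkfs.ext4 -qF "${disk}p1"                            # format the new partition
    mkdir -p "$mnt"
    mount "${disk}p1" "$mnt"                             # mount it for the test
    touch "$mnt/test_nvme"                               # dummy file the test later checks and removes
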
00:04:43.958 12:26:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:43.958 12:26:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:43.958 12:26:24 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 639680 00:04:43.958 12:26:24 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.958 12:26:24 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:43.958 12:26:24 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.958 12:26:24 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:43.958 12:26:24 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:43.958 12:26:24 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.958 12:26:24 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:43.958 12:26:24 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:04:43.958 12:26:24 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:43.958 12:26:24 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.958 12:26:24 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:43.958 12:26:24 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:43.958 12:26:24 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:43.958 12:26:24 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:43.958 12:26:24 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:43.958 12:26:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.958 12:26:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:04:43.958 12:26:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:43.958 12:26:24 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.958 12:26:24 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:48.276 12:26:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.276 12:26:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:48.276 12:26:27 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:48.276 12:26:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.276 
12:26:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.276 12:26:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.276 12:26:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.276 12:26:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.276 12:26:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.276 12:26:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.276 12:26:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.276 12:26:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.276 12:26:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.276 12:26:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.276 12:26:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.276 12:26:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.276 12:26:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.276 12:26:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.276 12:26:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.276 12:26:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.276 12:26:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.276 12:26:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.276 12:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.276 12:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.276 12:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.276 12:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.276 12:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.276 12:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.276 12:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.276 12:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.276 12:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.276 12:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.276 12:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.276 12:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.276 12:26:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:48.276 12:26:28 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.181 12:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:50.181 12:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:50.181 12:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.181 12:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:50.181 12:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:50.181 12:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:50.181 12:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.181 12:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.181 12:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.181 12:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:50.181 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:50.181 12:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:50.181 12:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:50.440 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:50.440 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:50.440 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:50.440 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:50.440 12:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:50.440 12:26:30 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:50.440 12:26:30 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.440 12:26:30 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:50.440 12:26:30 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:50.440 12:26:30 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.440 12:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:50.440 12:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:04:50.440 12:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:50.440 12:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local 
mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.440 12:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:50.441 12:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:50.441 12:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:50.441 12:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:50.441 12:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:50.441 12:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.441 12:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:04:50.441 12:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:50.441 12:26:30 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.441 12:26:30 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:53.727 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:53.727 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:53.727 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:53.727 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.727 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:53.727 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:53.728 12:26:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.263 12:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.263 12:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:56.263 12:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.263 12:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:56.263 12:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:56.263 12:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.263 12:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:1a:00.0 data@nvme0n1 '' '' 00:04:56.263 12:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:04:56.263 12:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:56.263 12:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:56.263 12:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:56.263 12:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:56.263 12:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:56.263 12:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local 
pci status 00:04:56.263 12:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.263 12:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:04:56.263 12:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:56.263 12:26:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.263 12:26:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ 
_ status 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:59.563 12:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.102 12:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:02.102 12:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:02.102 12:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:02.102 12:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:02.102 12:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.102 12:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:02.102 12:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:02.102 12:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:02.102 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:02.102 00:05:02.102 real 0m20.139s 00:05:02.102 user 0m5.681s 00:05:02.102 sys 0m11.808s 00:05:02.102 12:26:42 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.102 12:26:42 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:02.102 ************************************ 00:05:02.102 END TEST nvme_mount 00:05:02.102 ************************************ 00:05:02.102 12:26:42 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:02.102 12:26:42 setup.sh.devices -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.102 12:26:42 setup.sh.devices -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.102 12:26:42 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:02.102 ************************************ 00:05:02.102 START TEST dm_mount 00:05:02.102 ************************************ 00:05:02.102 12:26:42 setup.sh.devices.dm_mount -- common/autotest_common.sh@1129 -- # dm_mount 00:05:02.102 12:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:02.102 12:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:02.102 12:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:02.102 12:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:02.102 12:26:42 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:02.102 
12:26:42 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:02.102 12:26:42 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:02.102 12:26:42 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:02.102 12:26:42 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:02.102 12:26:42 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:02.102 12:26:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:02.102 12:26:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:02.102 12:26:42 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:02.102 12:26:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:02.102 12:26:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:02.102 12:26:42 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:02.102 12:26:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:02.102 12:26:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:02.102 12:26:42 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:02.102 12:26:42 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:02.102 12:26:42 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:03.041 Creating new GPT entries in memory. 00:05:03.041 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:03.041 other utilities. 00:05:03.041 12:26:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:03.041 12:26:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:03.041 12:26:43 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:03.041 12:26:43 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:03.041 12:26:43 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:04.421 Creating new GPT entries in memory. 00:05:04.421 The operation has completed successfully. 00:05:04.421 12:26:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:04.421 12:26:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:04.421 12:26:44 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:04.421 12:26:44 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:04.421 12:26:44 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:05.358 The operation has completed successfully. 
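
With both 1 GiB partitions in place, the dm_mount test next creates a device-mapper target named nvme_dm_test over them, formats it, and mounts it under spdk/test/setup/dm_mount. A hedged sketch of one way to build such a device by hand, assuming a simple linear concatenation of the two partitions (the table below is illustrative and not taken from the test script):

    p1=/dev/nvme0n1p1
    p2=/dev/nvme0n1p2
    s1=$(blockdev --getsz "$p1")                         # partition sizes in 512-byte sectors
    s2=$(blockdev --getsz "$p2")

    # Feed a two-segment linear table to dmsetup to concatenate the partitions.
    {
        echo "0 $s1 linear $p1 0"
        echo "$s1 $s2 linear $p2 0"
    } | dmsetup create nvme_dm_test

    mkfs.ext4 -qF /dev/mapper/nvme_dm_test
    mount /dev/mapper/nvme_dm_test /tmp/dm_mount         # illustrative mount point
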
00:05:05.358 12:26:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:05.358 12:26:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:05.358 12:26:45 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 645047 00:05:05.358 12:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:05.358 12:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:05.358 12:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:05.358 12:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:05.358 12:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:05.358 12:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:05.358 12:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:05.358 12:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:05.358 12:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:05.358 12:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:05.358 12:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:05.358 12:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:05.358 12:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:05.358 12:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:05.358 12:26:45 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:05:05.359 12:26:45 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:05.359 12:26:45 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:05.359 12:26:45 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:05.359 12:26:45 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:05.359 12:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:1a:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:05.359 12:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:05:05.359 12:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:05.359 12:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:05.359 12:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:05.359 12:26:45 setup.sh.devices.dm_mount -- 
setup/devices.sh@53 -- # local found=0 00:05:05.359 12:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:05.359 12:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:05.359 12:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:05.359 12:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.359 12:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:05:05.359 12:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:05.359 12:26:45 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.359 12:26:45 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.548 12:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.449 12:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:11.449 12:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:11.449 12:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:11.449 12:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:11.449 12:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:11.449 12:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:11.449 12:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:1a:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:11.449 12:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:05:11.449 12:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:11.449 12:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:11.449 12:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:11.449 12:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:11.449 12:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:11.449 12:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:11.449 12:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:05:11.449 12:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.449 12:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:11.449 12:26:51 
setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.449 12:26:51 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.743 12:26:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.650 12:26:56 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:16.650 12:26:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:16.650 12:26:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:16.650 12:26:56 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:16.650 12:26:56 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:16.650 12:26:56 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:16.650 12:26:56 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:16.650 12:26:56 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:16.650 12:26:56 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:16.650 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:16.650 12:26:56 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:16.650 12:26:56 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:16.650 00:05:16.650 real 0m14.518s 00:05:16.650 user 0m3.547s 00:05:16.650 sys 0m7.745s 00:05:16.650 12:26:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.650 12:26:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:16.650 ************************************ 00:05:16.650 END TEST dm_mount 00:05:16.650 ************************************ 00:05:16.650 12:26:56 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:16.650 12:26:56 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:16.650 12:26:56 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:16.650 12:26:56 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:16.650 12:26:56 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:16.650 12:26:56 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:16.650 12:26:56 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:16.909 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:16.909 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:16.909 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:16.909 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:16.909 12:26:57 
setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:16.909 12:26:57 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:16.909 12:26:57 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:16.909 12:26:57 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:16.909 12:26:57 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:16.909 12:26:57 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:16.909 12:26:57 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:16.909 00:05:16.909 real 0m41.657s 00:05:16.909 user 0m11.653s 00:05:16.909 sys 0m24.009s 00:05:16.909 12:26:57 setup.sh.devices -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.909 12:26:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:16.909 ************************************ 00:05:16.909 END TEST devices 00:05:16.909 ************************************ 00:05:16.909 00:05:16.909 real 2m23.872s 00:05:16.909 user 0m41.953s 00:05:16.909 sys 1m23.677s 00:05:16.909 12:26:57 setup.sh -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.909 12:26:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:16.909 ************************************ 00:05:16.909 END TEST setup.sh 00:05:16.909 ************************************ 00:05:17.168 12:26:57 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:05:21.362 Hugepages 00:05:21.362 node hugesize free / total 00:05:21.362 node0 1048576kB 0 / 0 00:05:21.362 node0 2048kB 1024 / 1024 00:05:21.362 node1 1048576kB 0 / 0 00:05:21.362 node1 2048kB 1024 / 1024 00:05:21.362 00:05:21.362 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:21.362 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:21.362 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:21.362 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:21.362 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:21.362 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:21.362 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:21.362 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:21.362 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:21.362 NVMe 0000:1a:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:21.362 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:21.362 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:21.362 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:21.362 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:21.362 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:21.362 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:21.363 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:21.363 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:21.363 12:27:01 -- spdk/autotest.sh@117 -- # uname -s 00:05:21.363 12:27:01 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:21.363 12:27:01 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:21.363 12:27:01 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:24.653 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:24.653 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:24.653 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:24.653 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:24.653 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:24.653 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:24.653 0000:00:04.1 
(8086 2021): ioatdma -> vfio-pci 00:05:24.653 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:24.653 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:24.911 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:24.911 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:24.911 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:24.911 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:24.911 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:24.911 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:24.911 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:28.202 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:05:30.104 12:27:10 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:31.481 12:27:11 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:31.481 12:27:11 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:31.481 12:27:11 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:31.481 12:27:11 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:31.481 12:27:11 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:31.482 12:27:11 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:31.482 12:27:11 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:31.482 12:27:11 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:31.482 12:27:11 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:31.482 12:27:11 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:31.482 12:27:11 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:1a:00.0 00:05:31.482 12:27:11 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:35.673 Waiting for block devices as requested 00:05:35.673 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme 00:05:35.673 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:35.673 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:35.673 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:35.673 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:35.673 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:35.673 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:35.673 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:35.673 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:35.932 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:35.932 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:35.932 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:36.191 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:36.191 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:36.191 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:36.449 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:36.449 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:38.986 12:27:18 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:38.986 12:27:18 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:1a:00.0 00:05:38.986 12:27:18 -- common/autotest_common.sh@1487 -- # grep 0000:1a:00.0/nvme/nvme 00:05:38.986 12:27:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:38.986 12:27:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 00:05:38.986 12:27:18 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 ]] 00:05:38.986 12:27:18 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 00:05:38.986 12:27:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:38.986 12:27:18 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:38.986 12:27:18 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:38.986 12:27:18 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:38.986 12:27:18 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:38.986 12:27:18 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:38.986 12:27:18 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:05:38.986 12:27:18 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:38.986 12:27:18 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:38.986 12:27:18 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:38.986 12:27:18 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:38.986 12:27:18 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:38.986 12:27:18 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:38.986 12:27:18 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:38.986 12:27:18 -- common/autotest_common.sh@1543 -- # continue 00:05:38.986 12:27:18 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:38.986 12:27:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:38.986 12:27:18 -- common/autotest_common.sh@10 -- # set +x 00:05:38.986 12:27:19 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:38.986 12:27:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:38.986 12:27:19 -- common/autotest_common.sh@10 -- # set +x 00:05:38.986 12:27:19 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:43.181 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:43.181 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:43.182 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:43.182 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:43.182 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:43.182 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:43.182 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:43.182 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:43.182 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:43.182 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:43.182 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:43.182 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:43.182 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:43.182 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:43.182 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:43.182 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:46.466 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:05:48.370 12:27:28 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:48.370 12:27:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:48.370 12:27:28 -- common/autotest_common.sh@10 -- # set +x 00:05:48.370 12:27:28 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:48.370 12:27:28 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:48.370 12:27:28 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:48.370 12:27:28 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:48.370 12:27:28 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:48.370 12:27:28 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:48.370 12:27:28 -- 
common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:48.370 12:27:28 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:48.370 12:27:28 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:48.370 12:27:28 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:48.370 12:27:28 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:48.370 12:27:28 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:48.370 12:27:28 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:48.370 12:27:28 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:48.370 12:27:28 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:1a:00.0 00:05:48.370 12:27:28 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:48.371 12:27:28 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:1a:00.0/device 00:05:48.371 12:27:28 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:48.371 12:27:28 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:48.371 12:27:28 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:48.371 12:27:28 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:48.371 12:27:28 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:1a:00.0 00:05:48.371 12:27:28 -- common/autotest_common.sh@1579 -- # [[ -z 0000:1a:00.0 ]] 00:05:48.371 12:27:28 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=656760 00:05:48.371 12:27:28 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.371 12:27:28 -- common/autotest_common.sh@1585 -- # waitforlisten 656760 00:05:48.371 12:27:28 -- common/autotest_common.sh@835 -- # '[' -z 656760 ']' 00:05:48.371 12:27:28 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.371 12:27:28 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.371 12:27:28 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.371 12:27:28 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.371 12:27:28 -- common/autotest_common.sh@10 -- # set +x 00:05:48.371 [2024-11-15 12:27:28.615299] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
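The opal_revert_cleanup step traced here first narrows the NVMe list to the drives it can handle by PCI device ID; reconstructed from the xtrace above, the selection logic is roughly (paths shortened, variable names as they appear in the trace):

  # get_nvme_bdfs: every NVMe PCI address the config generator knows about
  _bdfs=($(scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
  bdfs=()
  for bdf in "${_bdfs[@]}"; do
          # keep only controllers whose device ID matches 0x0a54 (the drive model on this node)
          device=$(cat "/sys/bus/pci/devices/$bdf/device")
          [[ $device == 0x0a54 ]] && bdfs+=("$bdf")
  done
  printf '%s\n' "${bdfs[@]}"   # 0000:1a:00.0 in this run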
00:05:48.371 [2024-11-15 12:27:28.615402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid656760 ] 00:05:48.371 [2024-11-15 12:27:28.701993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.629 [2024-11-15 12:27:28.751825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.888 12:27:28 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.888 12:27:28 -- common/autotest_common.sh@868 -- # return 0 00:05:48.888 12:27:28 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:48.888 12:27:28 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:48.888 12:27:28 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:1a:00.0 00:05:52.250 nvme0n1 00:05:52.250 12:27:32 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:52.250 [2024-11-15 12:27:32.192479] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:52.250 request: 00:05:52.250 { 00:05:52.250 "nvme_ctrlr_name": "nvme0", 00:05:52.250 "password": "test", 00:05:52.250 "method": "bdev_nvme_opal_revert", 00:05:52.250 "req_id": 1 00:05:52.250 } 00:05:52.250 Got JSON-RPC error response 00:05:52.250 response: 00:05:52.250 { 00:05:52.250 "code": -32602, 00:05:52.250 "message": "Invalid parameters" 00:05:52.250 } 00:05:52.250 12:27:32 -- common/autotest_common.sh@1591 -- # true 00:05:52.250 12:27:32 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:52.250 12:27:32 -- common/autotest_common.sh@1595 -- # killprocess 656760 00:05:52.250 12:27:32 -- common/autotest_common.sh@954 -- # '[' -z 656760 ']' 00:05:52.250 12:27:32 -- common/autotest_common.sh@958 -- # kill -0 656760 00:05:52.250 12:27:32 -- common/autotest_common.sh@959 -- # uname 00:05:52.250 12:27:32 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.251 12:27:32 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 656760 00:05:52.251 12:27:32 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.251 12:27:32 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.251 12:27:32 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 656760' 00:05:52.251 killing process with pid 656760 00:05:52.251 12:27:32 -- common/autotest_common.sh@973 -- # kill 656760 00:05:52.251 12:27:32 -- common/autotest_common.sh@978 -- # wait 656760 00:05:56.441 12:27:36 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:56.441 12:27:36 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:56.441 12:27:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:56.441 12:27:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:56.441 12:27:36 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:56.441 12:27:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:56.441 12:27:36 -- common/autotest_common.sh@10 -- # set +x 00:05:56.441 12:27:36 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:56.441 12:27:36 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:56.441 12:27:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.441 12:27:36 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:05:56.441 12:27:36 -- common/autotest_common.sh@10 -- # set +x 00:05:56.441 ************************************ 00:05:56.441 START TEST env 00:05:56.441 ************************************ 00:05:56.441 12:27:36 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:56.441 * Looking for test storage... 00:05:56.441 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:05:56.441 12:27:36 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:56.441 12:27:36 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:56.441 12:27:36 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:56.441 12:27:36 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:56.441 12:27:36 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.441 12:27:36 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.441 12:27:36 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.441 12:27:36 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.441 12:27:36 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.441 12:27:36 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.441 12:27:36 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.441 12:27:36 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.441 12:27:36 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.441 12:27:36 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.441 12:27:36 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.441 12:27:36 env -- scripts/common.sh@344 -- # case "$op" in 00:05:56.441 12:27:36 env -- scripts/common.sh@345 -- # : 1 00:05:56.441 12:27:36 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.441 12:27:36 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.441 12:27:36 env -- scripts/common.sh@365 -- # decimal 1 00:05:56.441 12:27:36 env -- scripts/common.sh@353 -- # local d=1 00:05:56.441 12:27:36 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.441 12:27:36 env -- scripts/common.sh@355 -- # echo 1 00:05:56.441 12:27:36 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.441 12:27:36 env -- scripts/common.sh@366 -- # decimal 2 00:05:56.441 12:27:36 env -- scripts/common.sh@353 -- # local d=2 00:05:56.441 12:27:36 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.441 12:27:36 env -- scripts/common.sh@355 -- # echo 2 00:05:56.441 12:27:36 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.441 12:27:36 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.441 12:27:36 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.441 12:27:36 env -- scripts/common.sh@368 -- # return 0 00:05:56.441 12:27:36 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.441 12:27:36 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:56.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.441 --rc genhtml_branch_coverage=1 00:05:56.441 --rc genhtml_function_coverage=1 00:05:56.441 --rc genhtml_legend=1 00:05:56.441 --rc geninfo_all_blocks=1 00:05:56.441 --rc geninfo_unexecuted_blocks=1 00:05:56.441 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:56.441 ' 00:05:56.441 12:27:36 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:56.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.441 --rc genhtml_branch_coverage=1 00:05:56.441 --rc genhtml_function_coverage=1 00:05:56.441 --rc genhtml_legend=1 00:05:56.441 --rc geninfo_all_blocks=1 00:05:56.441 --rc geninfo_unexecuted_blocks=1 00:05:56.441 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:56.441 ' 00:05:56.441 12:27:36 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:56.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.441 --rc genhtml_branch_coverage=1 00:05:56.441 --rc genhtml_function_coverage=1 00:05:56.441 --rc genhtml_legend=1 00:05:56.441 --rc geninfo_all_blocks=1 00:05:56.441 --rc geninfo_unexecuted_blocks=1 00:05:56.441 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:56.441 ' 00:05:56.441 12:27:36 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:56.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.441 --rc genhtml_branch_coverage=1 00:05:56.441 --rc genhtml_function_coverage=1 00:05:56.441 --rc genhtml_legend=1 00:05:56.441 --rc geninfo_all_blocks=1 00:05:56.441 --rc geninfo_unexecuted_blocks=1 00:05:56.441 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:56.441 ' 00:05:56.441 12:27:36 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:56.441 12:27:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.441 12:27:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.441 12:27:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.441 ************************************ 00:05:56.441 START TEST env_memory 00:05:56.441 ************************************ 00:05:56.441 12:27:36 env.env_memory -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:56.441 00:05:56.441 00:05:56.441 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.441 http://cunit.sourceforge.net/ 00:05:56.441 00:05:56.441 00:05:56.441 Suite: memory 00:05:56.441 Test: alloc and free memory map ...[2024-11-15 12:27:36.560179] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:56.441 passed 00:05:56.441 Test: mem map translation ...[2024-11-15 12:27:36.573760] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:56.442 [2024-11-15 12:27:36.573788] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:56.442 [2024-11-15 12:27:36.573818] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:56.442 [2024-11-15 12:27:36.573828] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:56.442 passed 00:05:56.442 Test: mem map registration ...[2024-11-15 12:27:36.594235] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:56.442 [2024-11-15 12:27:36.594249] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:56.442 passed 00:05:56.442 Test: mem map adjacent registrations ...passed 00:05:56.442 00:05:56.442 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.442 suites 1 1 n/a 0 0 00:05:56.442 tests 4 4 4 0 0 00:05:56.442 asserts 152 152 152 0 n/a 00:05:56.442 00:05:56.442 Elapsed time = 0.086 seconds 00:05:56.442 00:05:56.442 real 0m0.099s 00:05:56.442 user 0m0.087s 00:05:56.442 sys 0m0.012s 00:05:56.442 12:27:36 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.442 12:27:36 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:56.442 ************************************ 00:05:56.442 END TEST env_memory 00:05:56.442 ************************************ 00:05:56.442 12:27:36 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:56.442 12:27:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.442 12:27:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.442 12:27:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.442 ************************************ 00:05:56.442 START TEST env_vtophys 00:05:56.442 ************************************ 00:05:56.442 12:27:36 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:56.442 EAL: lib.eal log level changed from notice to debug 00:05:56.442 EAL: Detected lcore 0 as core 0 on socket 0 00:05:56.442 EAL: Detected lcore 1 as core 1 on socket 0 00:05:56.442 EAL: Detected lcore 2 as core 2 on socket 0 00:05:56.442 EAL: Detected lcore 3 as 
core 3 on socket 0 00:05:56.442 EAL: Detected lcore 4 as core 4 on socket 0 00:05:56.442 EAL: Detected lcore 5 as core 8 on socket 0 00:05:56.442 EAL: Detected lcore 6 as core 9 on socket 0 00:05:56.442 EAL: Detected lcore 7 as core 10 on socket 0 00:05:56.442 EAL: Detected lcore 8 as core 11 on socket 0 00:05:56.442 EAL: Detected lcore 9 as core 16 on socket 0 00:05:56.442 EAL: Detected lcore 10 as core 17 on socket 0 00:05:56.442 EAL: Detected lcore 11 as core 18 on socket 0 00:05:56.442 EAL: Detected lcore 12 as core 19 on socket 0 00:05:56.442 EAL: Detected lcore 13 as core 20 on socket 0 00:05:56.442 EAL: Detected lcore 14 as core 24 on socket 0 00:05:56.442 EAL: Detected lcore 15 as core 25 on socket 0 00:05:56.442 EAL: Detected lcore 16 as core 26 on socket 0 00:05:56.442 EAL: Detected lcore 17 as core 27 on socket 0 00:05:56.442 EAL: Detected lcore 18 as core 0 on socket 1 00:05:56.442 EAL: Detected lcore 19 as core 1 on socket 1 00:05:56.442 EAL: Detected lcore 20 as core 2 on socket 1 00:05:56.442 EAL: Detected lcore 21 as core 3 on socket 1 00:05:56.442 EAL: Detected lcore 22 as core 4 on socket 1 00:05:56.442 EAL: Detected lcore 23 as core 8 on socket 1 00:05:56.442 EAL: Detected lcore 24 as core 9 on socket 1 00:05:56.442 EAL: Detected lcore 25 as core 10 on socket 1 00:05:56.442 EAL: Detected lcore 26 as core 11 on socket 1 00:05:56.442 EAL: Detected lcore 27 as core 16 on socket 1 00:05:56.442 EAL: Detected lcore 28 as core 17 on socket 1 00:05:56.442 EAL: Detected lcore 29 as core 18 on socket 1 00:05:56.442 EAL: Detected lcore 30 as core 19 on socket 1 00:05:56.442 EAL: Detected lcore 31 as core 20 on socket 1 00:05:56.442 EAL: Detected lcore 32 as core 24 on socket 1 00:05:56.442 EAL: Detected lcore 33 as core 25 on socket 1 00:05:56.442 EAL: Detected lcore 34 as core 26 on socket 1 00:05:56.442 EAL: Detected lcore 35 as core 27 on socket 1 00:05:56.442 EAL: Detected lcore 36 as core 0 on socket 0 00:05:56.442 EAL: Detected lcore 37 as core 1 on socket 0 00:05:56.442 EAL: Detected lcore 38 as core 2 on socket 0 00:05:56.442 EAL: Detected lcore 39 as core 3 on socket 0 00:05:56.442 EAL: Detected lcore 40 as core 4 on socket 0 00:05:56.442 EAL: Detected lcore 41 as core 8 on socket 0 00:05:56.442 EAL: Detected lcore 42 as core 9 on socket 0 00:05:56.442 EAL: Detected lcore 43 as core 10 on socket 0 00:05:56.442 EAL: Detected lcore 44 as core 11 on socket 0 00:05:56.442 EAL: Detected lcore 45 as core 16 on socket 0 00:05:56.442 EAL: Detected lcore 46 as core 17 on socket 0 00:05:56.442 EAL: Detected lcore 47 as core 18 on socket 0 00:05:56.442 EAL: Detected lcore 48 as core 19 on socket 0 00:05:56.442 EAL: Detected lcore 49 as core 20 on socket 0 00:05:56.442 EAL: Detected lcore 50 as core 24 on socket 0 00:05:56.442 EAL: Detected lcore 51 as core 25 on socket 0 00:05:56.442 EAL: Detected lcore 52 as core 26 on socket 0 00:05:56.442 EAL: Detected lcore 53 as core 27 on socket 0 00:05:56.442 EAL: Detected lcore 54 as core 0 on socket 1 00:05:56.442 EAL: Detected lcore 55 as core 1 on socket 1 00:05:56.442 EAL: Detected lcore 56 as core 2 on socket 1 00:05:56.442 EAL: Detected lcore 57 as core 3 on socket 1 00:05:56.442 EAL: Detected lcore 58 as core 4 on socket 1 00:05:56.442 EAL: Detected lcore 59 as core 8 on socket 1 00:05:56.442 EAL: Detected lcore 60 as core 9 on socket 1 00:05:56.442 EAL: Detected lcore 61 as core 10 on socket 1 00:05:56.442 EAL: Detected lcore 62 as core 11 on socket 1 00:05:56.442 EAL: Detected lcore 63 as core 16 on socket 1 00:05:56.442 EAL: 
Detected lcore 64 as core 17 on socket 1 00:05:56.442 EAL: Detected lcore 65 as core 18 on socket 1 00:05:56.442 EAL: Detected lcore 66 as core 19 on socket 1 00:05:56.442 EAL: Detected lcore 67 as core 20 on socket 1 00:05:56.442 EAL: Detected lcore 68 as core 24 on socket 1 00:05:56.442 EAL: Detected lcore 69 as core 25 on socket 1 00:05:56.442 EAL: Detected lcore 70 as core 26 on socket 1 00:05:56.442 EAL: Detected lcore 71 as core 27 on socket 1 00:05:56.442 EAL: Maximum logical cores by configuration: 128 00:05:56.442 EAL: Detected CPU lcores: 72 00:05:56.442 EAL: Detected NUMA nodes: 2 00:05:56.442 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:56.442 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:56.442 EAL: Checking presence of .so 'librte_eal.so' 00:05:56.442 EAL: Detected static linkage of DPDK 00:05:56.442 EAL: No shared files mode enabled, IPC will be disabled 00:05:56.442 EAL: Bus pci wants IOVA as 'DC' 00:05:56.442 EAL: Buses did not request a specific IOVA mode. 00:05:56.442 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:56.442 EAL: Selected IOVA mode 'VA' 00:05:56.442 EAL: Probing VFIO support... 00:05:56.442 EAL: IOMMU type 1 (Type 1) is supported 00:05:56.442 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:56.442 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:56.442 EAL: VFIO support initialized 00:05:56.442 EAL: Ask a virtual area of 0x2e000 bytes 00:05:56.442 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:56.442 EAL: Setting up physically contiguous memory... 00:05:56.442 EAL: Setting maximum number of open files to 524288 00:05:56.442 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:56.442 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:56.442 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:56.442 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.442 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:56.442 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:56.442 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.442 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:56.442 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:56.442 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.442 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:56.442 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:56.442 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.442 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:56.442 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:56.442 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.442 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:56.442 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:56.442 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.442 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:56.442 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:56.442 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.442 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:56.442 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:56.442 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.442 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:56.442 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:56.442 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:05:56.442 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.442 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:56.442 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:56.442 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.442 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:56.442 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:56.442 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.442 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:56.442 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:56.442 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.442 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:56.442 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:56.442 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.442 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:56.442 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:56.442 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.442 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:56.442 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:56.443 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.443 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:56.443 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:56.443 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.443 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:56.443 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:56.443 EAL: Hugepages will be freed exactly as allocated. 00:05:56.443 EAL: No shared files mode enabled, IPC is disabled 00:05:56.443 EAL: No shared files mode enabled, IPC is disabled 00:05:56.443 EAL: TSC frequency is ~2300000 KHz 00:05:56.443 EAL: Main lcore 0 is ready (tid=7f7abb911a00;cpuset=[0]) 00:05:56.443 EAL: Trying to obtain current memory policy. 00:05:56.443 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.443 EAL: Restoring previous memory policy: 0 00:05:56.443 EAL: request: mp_malloc_sync 00:05:56.443 EAL: No shared files mode enabled, IPC is disabled 00:05:56.443 EAL: Heap on socket 0 was expanded by 2MB 00:05:56.443 EAL: No shared files mode enabled, IPC is disabled 00:05:56.701 EAL: Mem event callback 'spdk:(nil)' registered 00:05:56.701 00:05:56.701 00:05:56.701 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.701 http://cunit.sourceforge.net/ 00:05:56.701 00:05:56.702 00:05:56.702 Suite: components_suite 00:05:56.702 Test: vtophys_malloc_test ...passed 00:05:56.702 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:56.702 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.702 EAL: Restoring previous memory policy: 4 00:05:56.702 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.702 EAL: request: mp_malloc_sync 00:05:56.702 EAL: No shared files mode enabled, IPC is disabled 00:05:56.702 EAL: Heap on socket 0 was expanded by 4MB 00:05:56.702 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.702 EAL: request: mp_malloc_sync 00:05:56.702 EAL: No shared files mode enabled, IPC is disabled 00:05:56.702 EAL: Heap on socket 0 was shrunk by 4MB 00:05:56.702 EAL: Trying to obtain current memory policy. 
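A quick sanity check on the reservations above: each memseg list spans n_segs x hugepage_sz = 8192 x 2 MiB = 16 GiB, which is exactly the 0x400000000-byte virtual areas the EAL asks for, and the small 0x61000-byte areas in front of them apparently hold the per-list bookkeeping.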
00:05:56.702 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.702 EAL: Restoring previous memory policy: 4 00:05:56.702 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.702 EAL: request: mp_malloc_sync 00:05:56.702 EAL: No shared files mode enabled, IPC is disabled 00:05:56.702 EAL: Heap on socket 0 was expanded by 6MB 00:05:56.702 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.702 EAL: request: mp_malloc_sync 00:05:56.702 EAL: No shared files mode enabled, IPC is disabled 00:05:56.702 EAL: Heap on socket 0 was shrunk by 6MB 00:05:56.702 EAL: Trying to obtain current memory policy. 00:05:56.702 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.702 EAL: Restoring previous memory policy: 4 00:05:56.702 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.702 EAL: request: mp_malloc_sync 00:05:56.702 EAL: No shared files mode enabled, IPC is disabled 00:05:56.702 EAL: Heap on socket 0 was expanded by 10MB 00:05:56.702 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.702 EAL: request: mp_malloc_sync 00:05:56.702 EAL: No shared files mode enabled, IPC is disabled 00:05:56.702 EAL: Heap on socket 0 was shrunk by 10MB 00:05:56.702 EAL: Trying to obtain current memory policy. 00:05:56.702 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.702 EAL: Restoring previous memory policy: 4 00:05:56.702 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.702 EAL: request: mp_malloc_sync 00:05:56.702 EAL: No shared files mode enabled, IPC is disabled 00:05:56.702 EAL: Heap on socket 0 was expanded by 18MB 00:05:56.702 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.702 EAL: request: mp_malloc_sync 00:05:56.702 EAL: No shared files mode enabled, IPC is disabled 00:05:56.702 EAL: Heap on socket 0 was shrunk by 18MB 00:05:56.702 EAL: Trying to obtain current memory policy. 00:05:56.702 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.702 EAL: Restoring previous memory policy: 4 00:05:56.702 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.702 EAL: request: mp_malloc_sync 00:05:56.702 EAL: No shared files mode enabled, IPC is disabled 00:05:56.702 EAL: Heap on socket 0 was expanded by 34MB 00:05:56.702 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.702 EAL: request: mp_malloc_sync 00:05:56.702 EAL: No shared files mode enabled, IPC is disabled 00:05:56.702 EAL: Heap on socket 0 was shrunk by 34MB 00:05:56.702 EAL: Trying to obtain current memory policy. 00:05:56.702 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.702 EAL: Restoring previous memory policy: 4 00:05:56.702 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.702 EAL: request: mp_malloc_sync 00:05:56.702 EAL: No shared files mode enabled, IPC is disabled 00:05:56.702 EAL: Heap on socket 0 was expanded by 66MB 00:05:56.702 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.702 EAL: request: mp_malloc_sync 00:05:56.702 EAL: No shared files mode enabled, IPC is disabled 00:05:56.702 EAL: Heap on socket 0 was shrunk by 66MB 00:05:56.702 EAL: Trying to obtain current memory policy. 
00:05:56.702 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.702 EAL: Restoring previous memory policy: 4 00:05:56.702 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.702 EAL: request: mp_malloc_sync 00:05:56.702 EAL: No shared files mode enabled, IPC is disabled 00:05:56.702 EAL: Heap on socket 0 was expanded by 130MB 00:05:56.702 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.702 EAL: request: mp_malloc_sync 00:05:56.702 EAL: No shared files mode enabled, IPC is disabled 00:05:56.702 EAL: Heap on socket 0 was shrunk by 130MB 00:05:56.702 EAL: Trying to obtain current memory policy. 00:05:56.702 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.702 EAL: Restoring previous memory policy: 4 00:05:56.702 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.702 EAL: request: mp_malloc_sync 00:05:56.702 EAL: No shared files mode enabled, IPC is disabled 00:05:56.702 EAL: Heap on socket 0 was expanded by 258MB 00:05:56.702 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.961 EAL: request: mp_malloc_sync 00:05:56.961 EAL: No shared files mode enabled, IPC is disabled 00:05:56.961 EAL: Heap on socket 0 was shrunk by 258MB 00:05:56.961 EAL: Trying to obtain current memory policy. 00:05:56.961 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.961 EAL: Restoring previous memory policy: 4 00:05:56.961 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.961 EAL: request: mp_malloc_sync 00:05:56.961 EAL: No shared files mode enabled, IPC is disabled 00:05:56.961 EAL: Heap on socket 0 was expanded by 514MB 00:05:56.961 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.220 EAL: request: mp_malloc_sync 00:05:57.220 EAL: No shared files mode enabled, IPC is disabled 00:05:57.220 EAL: Heap on socket 0 was shrunk by 514MB 00:05:57.220 EAL: Trying to obtain current memory policy. 
00:05:57.220 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:57.479 EAL: Restoring previous memory policy: 4 00:05:57.479 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.479 EAL: request: mp_malloc_sync 00:05:57.479 EAL: No shared files mode enabled, IPC is disabled 00:05:57.479 EAL: Heap on socket 0 was expanded by 1026MB 00:05:57.479 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.738 EAL: request: mp_malloc_sync 00:05:57.738 EAL: No shared files mode enabled, IPC is disabled 00:05:57.738 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:57.738 passed 00:05:57.738 00:05:57.738 Run Summary: Type Total Ran Passed Failed Inactive 00:05:57.738 suites 1 1 n/a 0 0 00:05:57.738 tests 2 2 2 0 0 00:05:57.738 asserts 497 497 497 0 n/a 00:05:57.738 00:05:57.738 Elapsed time = 1.104 seconds 00:05:57.738 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.738 EAL: request: mp_malloc_sync 00:05:57.738 EAL: No shared files mode enabled, IPC is disabled 00:05:57.738 EAL: Heap on socket 0 was shrunk by 2MB 00:05:57.738 EAL: No shared files mode enabled, IPC is disabled 00:05:57.738 EAL: No shared files mode enabled, IPC is disabled 00:05:57.738 EAL: No shared files mode enabled, IPC is disabled 00:05:57.738 00:05:57.738 real 0m1.234s 00:05:57.738 user 0m0.715s 00:05:57.738 sys 0m0.489s 00:05:57.738 12:27:37 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.738 12:27:37 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:57.738 ************************************ 00:05:57.738 END TEST env_vtophys 00:05:57.738 ************************************ 00:05:57.738 12:27:37 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:57.738 12:27:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.738 12:27:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.738 12:27:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:57.738 ************************************ 00:05:57.738 START TEST env_pci 00:05:57.738 ************************************ 00:05:57.738 12:27:38 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:57.738 00:05:57.738 00:05:57.738 CUnit - A unit testing framework for C - Version 2.1-3 00:05:57.738 http://cunit.sourceforge.net/ 00:05:57.738 00:05:57.738 00:05:57.738 Suite: pci 00:05:57.738 Test: pci_hook ...[2024-11-15 12:27:38.026031] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1118:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 658091 has claimed it 00:05:57.738 EAL: Cannot find device (10000:00:01.0) 00:05:57.738 EAL: Failed to attach device on primary process 00:05:57.738 passed 00:05:57.738 00:05:57.738 Run Summary: Type Total Ran Passed Failed Inactive 00:05:57.738 suites 1 1 n/a 0 0 00:05:57.738 tests 1 1 1 0 0 00:05:57.738 asserts 25 25 25 0 n/a 00:05:57.738 00:05:57.738 Elapsed time = 0.040 seconds 00:05:57.738 00:05:57.738 real 0m0.060s 00:05:57.738 user 0m0.018s 00:05:57.738 sys 0m0.042s 00:05:57.738 12:27:38 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.738 12:27:38 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:57.738 ************************************ 00:05:57.738 END TEST env_pci 00:05:57.738 ************************************ 00:05:57.997 12:27:38 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:57.997 
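One pattern worth noting in the vtophys run above: the heap expansions step through 4, 6, 10, 18, 34, 66, 130, 258, 514 and finally 1026 MB, i.e. each power-of-two allocation from 2 MB up to 1 GB plus what looks like one extra 2 MB hugepage of allocator overhead per request, and every expansion is matched by an equal shrink once the buffer is freed.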
12:27:38 env -- env/env.sh@15 -- # uname 00:05:57.997 12:27:38 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:57.997 12:27:38 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:57.997 12:27:38 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:57.997 12:27:38 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:57.997 12:27:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.997 12:27:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:57.997 ************************************ 00:05:57.997 START TEST env_dpdk_post_init 00:05:57.997 ************************************ 00:05:57.997 12:27:38 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:57.997 EAL: Detected CPU lcores: 72 00:05:57.997 EAL: Detected NUMA nodes: 2 00:05:57.997 EAL: Detected static linkage of DPDK 00:05:57.997 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:57.997 EAL: Selected IOVA mode 'VA' 00:05:57.997 EAL: VFIO support initialized 00:05:57.997 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:57.997 EAL: Using IOMMU type 1 (Type 1) 00:05:58.932 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:1a:00.0 (socket 0) 00:06:04.195 EAL: Releasing PCI mapped resource for 0000:1a:00.0 00:06:04.195 EAL: Calling pci_unmap_resource for 0000:1a:00.0 at 0x202001000000 00:06:04.452 Starting DPDK initialization... 00:06:04.452 Starting SPDK post initialization... 00:06:04.452 SPDK NVMe probe 00:06:04.452 Attaching to 0000:1a:00.0 00:06:04.452 Attached to 0000:1a:00.0 00:06:04.452 Cleaning up... 
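For context, the arguments for this post-init run are assembled by env/env.sh just above; reconstructed from its trace, the logic is roughly the following (only $testdir is illustrative, the flags are the ones shown in the log):

  argv='-c 0x1 '
  if [ "$(uname)" = Linux ]; then
          # pin the DPDK mappings to a fixed virtual base so they land at predictable addresses
          argv+=--base-virtaddr=0x200000000000
  fi
  run_test env_dpdk_post_init "$testdir/env_dpdk_post_init/env_dpdk_post_init" $argv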
00:06:04.452 00:06:04.452 real 0m6.557s 00:06:04.452 user 0m4.770s 00:06:04.452 sys 0m1.037s 00:06:04.452 12:27:44 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.453 12:27:44 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:04.453 ************************************ 00:06:04.453 END TEST env_dpdk_post_init 00:06:04.453 ************************************ 00:06:04.453 12:27:44 env -- env/env.sh@26 -- # uname 00:06:04.453 12:27:44 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:04.453 12:27:44 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:04.453 12:27:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.453 12:27:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.453 12:27:44 env -- common/autotest_common.sh@10 -- # set +x 00:06:04.710 ************************************ 00:06:04.710 START TEST env_mem_callbacks 00:06:04.710 ************************************ 00:06:04.710 12:27:44 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:04.710 EAL: Detected CPU lcores: 72 00:06:04.710 EAL: Detected NUMA nodes: 2 00:06:04.710 EAL: Detected static linkage of DPDK 00:06:04.710 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:04.711 EAL: Selected IOVA mode 'VA' 00:06:04.711 EAL: VFIO support initialized 00:06:04.711 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:04.711 00:06:04.711 00:06:04.711 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.711 http://cunit.sourceforge.net/ 00:06:04.711 00:06:04.711 00:06:04.711 Suite: memory 00:06:04.711 Test: test ... 
00:06:04.711 register 0x200000200000 2097152 00:06:04.711 malloc 3145728 00:06:04.711 register 0x200000400000 4194304 00:06:04.711 buf 0x200000500000 len 3145728 PASSED 00:06:04.711 malloc 64 00:06:04.711 buf 0x2000004fff40 len 64 PASSED 00:06:04.711 malloc 4194304 00:06:04.711 register 0x200000800000 6291456 00:06:04.711 buf 0x200000a00000 len 4194304 PASSED 00:06:04.711 free 0x200000500000 3145728 00:06:04.711 free 0x2000004fff40 64 00:06:04.711 unregister 0x200000400000 4194304 PASSED 00:06:04.711 free 0x200000a00000 4194304 00:06:04.711 unregister 0x200000800000 6291456 PASSED 00:06:04.711 malloc 8388608 00:06:04.711 register 0x200000400000 10485760 00:06:04.711 buf 0x200000600000 len 8388608 PASSED 00:06:04.711 free 0x200000600000 8388608 00:06:04.711 unregister 0x200000400000 10485760 PASSED 00:06:04.711 passed 00:06:04.711 00:06:04.711 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.711 suites 1 1 n/a 0 0 00:06:04.711 tests 1 1 1 0 0 00:06:04.711 asserts 15 15 15 0 n/a 00:06:04.711 00:06:04.711 Elapsed time = 0.007 seconds 00:06:04.711 00:06:04.711 real 0m0.075s 00:06:04.711 user 0m0.023s 00:06:04.711 sys 0m0.051s 00:06:04.711 12:27:44 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.711 12:27:44 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:04.711 ************************************ 00:06:04.711 END TEST env_mem_callbacks 00:06:04.711 ************************************ 00:06:04.711 00:06:04.711 real 0m8.627s 00:06:04.711 user 0m5.858s 00:06:04.711 sys 0m2.029s 00:06:04.711 12:27:44 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.711 12:27:44 env -- common/autotest_common.sh@10 -- # set +x 00:06:04.711 ************************************ 00:06:04.711 END TEST env 00:06:04.711 ************************************ 00:06:04.711 12:27:44 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:06:04.711 12:27:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.711 12:27:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.711 12:27:44 -- common/autotest_common.sh@10 -- # set +x 00:06:04.711 ************************************ 00:06:04.711 START TEST rpc 00:06:04.711 ************************************ 00:06:04.711 12:27:45 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:06:04.969 * Looking for test storage... 
00:06:04.969 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:04.969 12:27:45 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:04.969 12:27:45 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:04.969 12:27:45 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:04.969 12:27:45 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:04.969 12:27:45 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.969 12:27:45 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.969 12:27:45 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.969 12:27:45 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.969 12:27:45 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.969 12:27:45 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.969 12:27:45 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.969 12:27:45 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.969 12:27:45 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.969 12:27:45 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.969 12:27:45 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.969 12:27:45 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:04.969 12:27:45 rpc -- scripts/common.sh@345 -- # : 1 00:06:04.969 12:27:45 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.969 12:27:45 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.969 12:27:45 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:04.969 12:27:45 rpc -- scripts/common.sh@353 -- # local d=1 00:06:04.969 12:27:45 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.969 12:27:45 rpc -- scripts/common.sh@355 -- # echo 1 00:06:04.969 12:27:45 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.969 12:27:45 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:04.969 12:27:45 rpc -- scripts/common.sh@353 -- # local d=2 00:06:04.969 12:27:45 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.969 12:27:45 rpc -- scripts/common.sh@355 -- # echo 2 00:06:04.969 12:27:45 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.969 12:27:45 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.969 12:27:45 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.969 12:27:45 rpc -- scripts/common.sh@368 -- # return 0 00:06:04.969 12:27:45 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.969 12:27:45 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:04.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.969 --rc genhtml_branch_coverage=1 00:06:04.969 --rc genhtml_function_coverage=1 00:06:04.969 --rc genhtml_legend=1 00:06:04.969 --rc geninfo_all_blocks=1 00:06:04.969 --rc geninfo_unexecuted_blocks=1 00:06:04.969 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:04.969 ' 00:06:04.969 12:27:45 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:04.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.969 --rc genhtml_branch_coverage=1 00:06:04.969 --rc genhtml_function_coverage=1 00:06:04.969 --rc genhtml_legend=1 00:06:04.969 --rc geninfo_all_blocks=1 00:06:04.969 --rc geninfo_unexecuted_blocks=1 00:06:04.969 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:04.969 ' 00:06:04.969 12:27:45 rpc -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:06:04.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.969 --rc genhtml_branch_coverage=1 00:06:04.969 --rc genhtml_function_coverage=1 00:06:04.969 --rc genhtml_legend=1 00:06:04.969 --rc geninfo_all_blocks=1 00:06:04.969 --rc geninfo_unexecuted_blocks=1 00:06:04.969 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:04.969 ' 00:06:04.969 12:27:45 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:04.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.969 --rc genhtml_branch_coverage=1 00:06:04.969 --rc genhtml_function_coverage=1 00:06:04.969 --rc genhtml_legend=1 00:06:04.969 --rc geninfo_all_blocks=1 00:06:04.969 --rc geninfo_unexecuted_blocks=1 00:06:04.969 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:04.969 ' 00:06:04.969 12:27:45 rpc -- rpc/rpc.sh@65 -- # spdk_pid=659154 00:06:04.969 12:27:45 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.969 12:27:45 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:04.969 12:27:45 rpc -- rpc/rpc.sh@67 -- # waitforlisten 659154 00:06:04.969 12:27:45 rpc -- common/autotest_common.sh@835 -- # '[' -z 659154 ']' 00:06:04.969 12:27:45 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.969 12:27:45 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.969 12:27:45 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.969 12:27:45 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.969 12:27:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.969 [2024-11-15 12:27:45.233829] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:06:04.969 [2024-11-15 12:27:45.233905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid659154 ] 00:06:05.226 [2024-11-15 12:27:45.322717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.226 [2024-11-15 12:27:45.366609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:05.226 [2024-11-15 12:27:45.366655] app.c: 616:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 659154' to capture a snapshot of events at runtime. 00:06:05.226 [2024-11-15 12:27:45.366667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:05.226 [2024-11-15 12:27:45.366691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:05.226 [2024-11-15 12:27:45.366698] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid659154 for offline analysis/debug. 
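The spdk_tgt instance for this rpc suite is started with only the bdev tracepoint group enabled (rpc.sh@64 above); a minimal sketch of that startup plus the snapshot command the notice suggests (the backgrounding and $! handling are illustrative; the test scripts rely on their own waitforlisten helper):

  build/bin/spdk_tgt -e bdev &
  spdk_pid=$!
  waitforlisten "$spdk_pid"               # helper from test/common/autotest_common.sh
  spdk_trace -s spdk_tgt -p "$spdk_pid"   # capture a snapshot of the bdev tracepoints at runtime
  # or copy /dev/shm/spdk_tgt_trace.pid$spdk_pid afterwards for offline analysis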
00:06:05.226 [2024-11-15 12:27:45.367188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.484 12:27:45 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.484 12:27:45 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:05.484 12:27:45 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:05.484 12:27:45 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:05.484 12:27:45 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:05.484 12:27:45 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:05.484 12:27:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.484 12:27:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.484 12:27:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.484 ************************************ 00:06:05.484 START TEST rpc_integrity 00:06:05.484 ************************************ 00:06:05.484 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:05.484 12:27:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:05.484 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.484 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.484 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.484 12:27:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:05.484 12:27:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:05.484 12:27:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:05.484 12:27:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:05.484 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.484 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.484 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.484 12:27:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:05.484 12:27:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:05.484 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.484 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.484 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.484 12:27:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:05.484 { 00:06:05.484 "name": "Malloc0", 00:06:05.484 "aliases": [ 00:06:05.484 "e9dec97f-7111-421b-b6f2-8cd9625c1e54" 00:06:05.484 ], 00:06:05.484 "product_name": "Malloc disk", 00:06:05.484 "block_size": 512, 00:06:05.484 "num_blocks": 16384, 00:06:05.484 "uuid": "e9dec97f-7111-421b-b6f2-8cd9625c1e54", 00:06:05.484 "assigned_rate_limits": { 00:06:05.484 "rw_ios_per_sec": 0, 00:06:05.484 "rw_mbytes_per_sec": 0, 00:06:05.484 "r_mbytes_per_sec": 0, 00:06:05.484 "w_mbytes_per_sec": 
0 00:06:05.484 }, 00:06:05.484 "claimed": false, 00:06:05.484 "zoned": false, 00:06:05.484 "supported_io_types": { 00:06:05.484 "read": true, 00:06:05.484 "write": true, 00:06:05.484 "unmap": true, 00:06:05.484 "flush": true, 00:06:05.484 "reset": true, 00:06:05.484 "nvme_admin": false, 00:06:05.484 "nvme_io": false, 00:06:05.484 "nvme_io_md": false, 00:06:05.484 "write_zeroes": true, 00:06:05.484 "zcopy": true, 00:06:05.484 "get_zone_info": false, 00:06:05.484 "zone_management": false, 00:06:05.484 "zone_append": false, 00:06:05.484 "compare": false, 00:06:05.484 "compare_and_write": false, 00:06:05.484 "abort": true, 00:06:05.484 "seek_hole": false, 00:06:05.484 "seek_data": false, 00:06:05.484 "copy": true, 00:06:05.484 "nvme_iov_md": false 00:06:05.484 }, 00:06:05.484 "memory_domains": [ 00:06:05.484 { 00:06:05.484 "dma_device_id": "system", 00:06:05.484 "dma_device_type": 1 00:06:05.484 }, 00:06:05.484 { 00:06:05.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.484 "dma_device_type": 2 00:06:05.484 } 00:06:05.484 ], 00:06:05.484 "driver_specific": {} 00:06:05.484 } 00:06:05.484 ]' 00:06:05.484 12:27:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:05.484 12:27:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:05.484 12:27:45 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:05.484 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.484 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.484 [2024-11-15 12:27:45.763499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:05.484 [2024-11-15 12:27:45.763543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:05.484 [2024-11-15 12:27:45.763561] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4e1ed10 00:06:05.484 [2024-11-15 12:27:45.763570] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:05.484 [2024-11-15 12:27:45.764499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:05.484 [2024-11-15 12:27:45.764522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:05.484 Passthru0 00:06:05.484 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.484 12:27:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:05.484 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.484 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.484 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.484 12:27:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:05.484 { 00:06:05.484 "name": "Malloc0", 00:06:05.484 "aliases": [ 00:06:05.484 "e9dec97f-7111-421b-b6f2-8cd9625c1e54" 00:06:05.484 ], 00:06:05.484 "product_name": "Malloc disk", 00:06:05.484 "block_size": 512, 00:06:05.484 "num_blocks": 16384, 00:06:05.484 "uuid": "e9dec97f-7111-421b-b6f2-8cd9625c1e54", 00:06:05.484 "assigned_rate_limits": { 00:06:05.484 "rw_ios_per_sec": 0, 00:06:05.484 "rw_mbytes_per_sec": 0, 00:06:05.484 "r_mbytes_per_sec": 0, 00:06:05.484 "w_mbytes_per_sec": 0 00:06:05.484 }, 00:06:05.484 "claimed": true, 00:06:05.484 "claim_type": "exclusive_write", 00:06:05.484 "zoned": false, 00:06:05.484 "supported_io_types": { 00:06:05.484 "read": true, 00:06:05.484 "write": true, 00:06:05.484 "unmap": true, 
00:06:05.484 "flush": true, 00:06:05.484 "reset": true, 00:06:05.484 "nvme_admin": false, 00:06:05.484 "nvme_io": false, 00:06:05.484 "nvme_io_md": false, 00:06:05.484 "write_zeroes": true, 00:06:05.484 "zcopy": true, 00:06:05.484 "get_zone_info": false, 00:06:05.484 "zone_management": false, 00:06:05.484 "zone_append": false, 00:06:05.484 "compare": false, 00:06:05.484 "compare_and_write": false, 00:06:05.484 "abort": true, 00:06:05.484 "seek_hole": false, 00:06:05.484 "seek_data": false, 00:06:05.484 "copy": true, 00:06:05.484 "nvme_iov_md": false 00:06:05.484 }, 00:06:05.484 "memory_domains": [ 00:06:05.484 { 00:06:05.484 "dma_device_id": "system", 00:06:05.484 "dma_device_type": 1 00:06:05.484 }, 00:06:05.484 { 00:06:05.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.484 "dma_device_type": 2 00:06:05.484 } 00:06:05.484 ], 00:06:05.484 "driver_specific": {} 00:06:05.484 }, 00:06:05.484 { 00:06:05.484 "name": "Passthru0", 00:06:05.484 "aliases": [ 00:06:05.484 "5a9a1227-3922-5b0f-9aaf-2d7a8a4b1e53" 00:06:05.484 ], 00:06:05.484 "product_name": "passthru", 00:06:05.484 "block_size": 512, 00:06:05.484 "num_blocks": 16384, 00:06:05.484 "uuid": "5a9a1227-3922-5b0f-9aaf-2d7a8a4b1e53", 00:06:05.484 "assigned_rate_limits": { 00:06:05.484 "rw_ios_per_sec": 0, 00:06:05.484 "rw_mbytes_per_sec": 0, 00:06:05.484 "r_mbytes_per_sec": 0, 00:06:05.484 "w_mbytes_per_sec": 0 00:06:05.484 }, 00:06:05.484 "claimed": false, 00:06:05.484 "zoned": false, 00:06:05.484 "supported_io_types": { 00:06:05.484 "read": true, 00:06:05.484 "write": true, 00:06:05.484 "unmap": true, 00:06:05.484 "flush": true, 00:06:05.484 "reset": true, 00:06:05.484 "nvme_admin": false, 00:06:05.484 "nvme_io": false, 00:06:05.484 "nvme_io_md": false, 00:06:05.484 "write_zeroes": true, 00:06:05.484 "zcopy": true, 00:06:05.484 "get_zone_info": false, 00:06:05.484 "zone_management": false, 00:06:05.484 "zone_append": false, 00:06:05.484 "compare": false, 00:06:05.484 "compare_and_write": false, 00:06:05.484 "abort": true, 00:06:05.484 "seek_hole": false, 00:06:05.484 "seek_data": false, 00:06:05.484 "copy": true, 00:06:05.484 "nvme_iov_md": false 00:06:05.484 }, 00:06:05.484 "memory_domains": [ 00:06:05.484 { 00:06:05.484 "dma_device_id": "system", 00:06:05.484 "dma_device_type": 1 00:06:05.484 }, 00:06:05.484 { 00:06:05.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.484 "dma_device_type": 2 00:06:05.484 } 00:06:05.484 ], 00:06:05.484 "driver_specific": { 00:06:05.484 "passthru": { 00:06:05.484 "name": "Passthru0", 00:06:05.484 "base_bdev_name": "Malloc0" 00:06:05.484 } 00:06:05.484 } 00:06:05.484 } 00:06:05.484 ]' 00:06:05.484 12:27:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:05.769 12:27:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:05.769 12:27:45 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:05.769 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.769 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.769 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.769 12:27:45 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:05.769 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.769 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.769 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.769 12:27:45 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:05.769 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.769 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.769 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.769 12:27:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:05.769 12:27:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:05.769 12:27:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:05.769 00:06:05.769 real 0m0.294s 00:06:05.769 user 0m0.183s 00:06:05.769 sys 0m0.052s 00:06:05.769 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.769 12:27:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.769 ************************************ 00:06:05.769 END TEST rpc_integrity 00:06:05.769 ************************************ 00:06:05.769 12:27:45 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:05.769 12:27:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.769 12:27:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.769 12:27:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.769 ************************************ 00:06:05.769 START TEST rpc_plugins 00:06:05.769 ************************************ 00:06:05.769 12:27:45 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:05.769 12:27:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:05.769 12:27:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.769 12:27:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.769 12:27:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.769 12:27:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:05.769 12:27:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:05.769 12:27:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.769 12:27:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.769 12:27:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.769 12:27:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:05.769 { 00:06:05.769 "name": "Malloc1", 00:06:05.769 "aliases": [ 00:06:05.769 "fc998126-a189-47c6-9f95-e52fb8d27c46" 00:06:05.769 ], 00:06:05.769 "product_name": "Malloc disk", 00:06:05.769 "block_size": 4096, 00:06:05.769 "num_blocks": 256, 00:06:05.769 "uuid": "fc998126-a189-47c6-9f95-e52fb8d27c46", 00:06:05.769 "assigned_rate_limits": { 00:06:05.769 "rw_ios_per_sec": 0, 00:06:05.769 "rw_mbytes_per_sec": 0, 00:06:05.769 "r_mbytes_per_sec": 0, 00:06:05.769 "w_mbytes_per_sec": 0 00:06:05.769 }, 00:06:05.769 "claimed": false, 00:06:05.769 "zoned": false, 00:06:05.769 "supported_io_types": { 00:06:05.769 "read": true, 00:06:05.769 "write": true, 00:06:05.769 "unmap": true, 00:06:05.769 "flush": true, 00:06:05.769 "reset": true, 00:06:05.769 "nvme_admin": false, 00:06:05.769 "nvme_io": false, 00:06:05.769 "nvme_io_md": false, 00:06:05.769 "write_zeroes": true, 00:06:05.769 "zcopy": true, 00:06:05.769 "get_zone_info": false, 00:06:05.769 "zone_management": false, 00:06:05.769 "zone_append": false, 00:06:05.769 "compare": false, 00:06:05.769 "compare_and_write": false, 00:06:05.769 "abort": true, 00:06:05.769 "seek_hole": false, 00:06:05.769 "seek_data": false, 00:06:05.769 "copy": true, 00:06:05.769 
"nvme_iov_md": false 00:06:05.769 }, 00:06:05.769 "memory_domains": [ 00:06:05.769 { 00:06:05.769 "dma_device_id": "system", 00:06:05.769 "dma_device_type": 1 00:06:05.769 }, 00:06:05.769 { 00:06:05.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.769 "dma_device_type": 2 00:06:05.769 } 00:06:05.769 ], 00:06:05.769 "driver_specific": {} 00:06:05.769 } 00:06:05.769 ]' 00:06:05.769 12:27:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:05.769 12:27:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:05.769 12:27:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:05.769 12:27:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.769 12:27:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.769 12:27:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.769 12:27:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:05.769 12:27:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.769 12:27:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.769 12:27:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.769 12:27:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:05.769 12:27:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:06.026 12:27:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:06.026 00:06:06.026 real 0m0.134s 00:06:06.026 user 0m0.077s 00:06:06.026 sys 0m0.028s 00:06:06.026 12:27:46 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.026 12:27:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:06.026 ************************************ 00:06:06.026 END TEST rpc_plugins 00:06:06.026 ************************************ 00:06:06.026 12:27:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:06.026 12:27:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.026 12:27:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.026 12:27:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.026 ************************************ 00:06:06.026 START TEST rpc_trace_cmd_test 00:06:06.026 ************************************ 00:06:06.026 12:27:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:06.026 12:27:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:06.026 12:27:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:06.026 12:27:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.026 12:27:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.026 12:27:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.026 12:27:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:06.026 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid659154", 00:06:06.026 "tpoint_group_mask": "0x8", 00:06:06.026 "iscsi_conn": { 00:06:06.026 "mask": "0x2", 00:06:06.026 "tpoint_mask": "0x0" 00:06:06.026 }, 00:06:06.026 "scsi": { 00:06:06.026 "mask": "0x4", 00:06:06.026 "tpoint_mask": "0x0" 00:06:06.026 }, 00:06:06.026 "bdev": { 00:06:06.026 "mask": "0x8", 00:06:06.026 "tpoint_mask": "0xffffffffffffffff" 00:06:06.026 }, 00:06:06.026 "nvmf_rdma": { 00:06:06.026 "mask": "0x10", 00:06:06.026 "tpoint_mask": "0x0" 00:06:06.026 }, 00:06:06.026 "nvmf_tcp": { 00:06:06.026 "mask": "0x20", 
00:06:06.026 "tpoint_mask": "0x0" 00:06:06.026 }, 00:06:06.026 "ftl": { 00:06:06.026 "mask": "0x40", 00:06:06.026 "tpoint_mask": "0x0" 00:06:06.026 }, 00:06:06.026 "blobfs": { 00:06:06.026 "mask": "0x80", 00:06:06.026 "tpoint_mask": "0x0" 00:06:06.026 }, 00:06:06.026 "dsa": { 00:06:06.026 "mask": "0x200", 00:06:06.026 "tpoint_mask": "0x0" 00:06:06.026 }, 00:06:06.026 "thread": { 00:06:06.026 "mask": "0x400", 00:06:06.026 "tpoint_mask": "0x0" 00:06:06.026 }, 00:06:06.026 "nvme_pcie": { 00:06:06.026 "mask": "0x800", 00:06:06.026 "tpoint_mask": "0x0" 00:06:06.026 }, 00:06:06.026 "iaa": { 00:06:06.026 "mask": "0x1000", 00:06:06.026 "tpoint_mask": "0x0" 00:06:06.026 }, 00:06:06.026 "nvme_tcp": { 00:06:06.026 "mask": "0x2000", 00:06:06.026 "tpoint_mask": "0x0" 00:06:06.026 }, 00:06:06.026 "bdev_nvme": { 00:06:06.026 "mask": "0x4000", 00:06:06.026 "tpoint_mask": "0x0" 00:06:06.026 }, 00:06:06.026 "sock": { 00:06:06.026 "mask": "0x8000", 00:06:06.026 "tpoint_mask": "0x0" 00:06:06.026 }, 00:06:06.026 "blob": { 00:06:06.026 "mask": "0x10000", 00:06:06.026 "tpoint_mask": "0x0" 00:06:06.026 }, 00:06:06.026 "bdev_raid": { 00:06:06.026 "mask": "0x20000", 00:06:06.026 "tpoint_mask": "0x0" 00:06:06.026 }, 00:06:06.026 "scheduler": { 00:06:06.026 "mask": "0x40000", 00:06:06.026 "tpoint_mask": "0x0" 00:06:06.026 } 00:06:06.026 }' 00:06:06.026 12:27:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:06.026 12:27:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:06.026 12:27:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:06.026 12:27:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:06.026 12:27:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:06.026 12:27:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:06.026 12:27:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:06.284 12:27:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:06.284 12:27:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:06.284 12:27:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:06.284 00:06:06.284 real 0m0.206s 00:06:06.284 user 0m0.168s 00:06:06.284 sys 0m0.033s 00:06:06.284 12:27:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.284 12:27:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.284 ************************************ 00:06:06.284 END TEST rpc_trace_cmd_test 00:06:06.284 ************************************ 00:06:06.284 12:27:46 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:06.284 12:27:46 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:06.284 12:27:46 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:06.284 12:27:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.284 12:27:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.284 12:27:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.284 ************************************ 00:06:06.284 START TEST rpc_daemon_integrity 00:06:06.284 ************************************ 00:06:06.284 12:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:06.284 12:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:06.284 12:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.284 12:27:46 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.284 12:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.284 12:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:06.284 12:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:06.284 12:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:06.284 12:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:06.284 12:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.284 12:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.284 12:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.284 12:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:06.285 12:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:06.285 12:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.285 12:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.285 12:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.285 12:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:06.285 { 00:06:06.285 "name": "Malloc2", 00:06:06.285 "aliases": [ 00:06:06.285 "602d5b01-512e-44c5-9ee4-2e3391b97418" 00:06:06.285 ], 00:06:06.285 "product_name": "Malloc disk", 00:06:06.285 "block_size": 512, 00:06:06.285 "num_blocks": 16384, 00:06:06.285 "uuid": "602d5b01-512e-44c5-9ee4-2e3391b97418", 00:06:06.285 "assigned_rate_limits": { 00:06:06.285 "rw_ios_per_sec": 0, 00:06:06.285 "rw_mbytes_per_sec": 0, 00:06:06.285 "r_mbytes_per_sec": 0, 00:06:06.285 "w_mbytes_per_sec": 0 00:06:06.285 }, 00:06:06.285 "claimed": false, 00:06:06.285 "zoned": false, 00:06:06.285 "supported_io_types": { 00:06:06.285 "read": true, 00:06:06.285 "write": true, 00:06:06.285 "unmap": true, 00:06:06.285 "flush": true, 00:06:06.285 "reset": true, 00:06:06.285 "nvme_admin": false, 00:06:06.285 "nvme_io": false, 00:06:06.285 "nvme_io_md": false, 00:06:06.285 "write_zeroes": true, 00:06:06.285 "zcopy": true, 00:06:06.285 "get_zone_info": false, 00:06:06.285 "zone_management": false, 00:06:06.285 "zone_append": false, 00:06:06.285 "compare": false, 00:06:06.285 "compare_and_write": false, 00:06:06.285 "abort": true, 00:06:06.285 "seek_hole": false, 00:06:06.285 "seek_data": false, 00:06:06.285 "copy": true, 00:06:06.285 "nvme_iov_md": false 00:06:06.285 }, 00:06:06.285 "memory_domains": [ 00:06:06.285 { 00:06:06.285 "dma_device_id": "system", 00:06:06.285 "dma_device_type": 1 00:06:06.285 }, 00:06:06.285 { 00:06:06.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.285 "dma_device_type": 2 00:06:06.285 } 00:06:06.285 ], 00:06:06.285 "driver_specific": {} 00:06:06.285 } 00:06:06.285 ]' 00:06:06.285 12:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:06.285 12:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:06.285 12:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:06.285 12:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.285 12:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.285 [2024-11-15 12:27:46.625736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:06.285 
[2024-11-15 12:27:46.625772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:06.285 [2024-11-15 12:27:46.625789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4f401d0 00:06:06.285 [2024-11-15 12:27:46.625799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:06.285 [2024-11-15 12:27:46.626783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:06.285 [2024-11-15 12:27:46.626808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:06.543 Passthru0 00:06:06.543 12:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.543 12:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:06.543 12:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.543 12:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.543 12:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.543 12:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:06.543 { 00:06:06.543 "name": "Malloc2", 00:06:06.543 "aliases": [ 00:06:06.543 "602d5b01-512e-44c5-9ee4-2e3391b97418" 00:06:06.543 ], 00:06:06.543 "product_name": "Malloc disk", 00:06:06.543 "block_size": 512, 00:06:06.543 "num_blocks": 16384, 00:06:06.543 "uuid": "602d5b01-512e-44c5-9ee4-2e3391b97418", 00:06:06.543 "assigned_rate_limits": { 00:06:06.543 "rw_ios_per_sec": 0, 00:06:06.543 "rw_mbytes_per_sec": 0, 00:06:06.543 "r_mbytes_per_sec": 0, 00:06:06.543 "w_mbytes_per_sec": 0 00:06:06.543 }, 00:06:06.543 "claimed": true, 00:06:06.543 "claim_type": "exclusive_write", 00:06:06.543 "zoned": false, 00:06:06.543 "supported_io_types": { 00:06:06.543 "read": true, 00:06:06.543 "write": true, 00:06:06.543 "unmap": true, 00:06:06.543 "flush": true, 00:06:06.543 "reset": true, 00:06:06.543 "nvme_admin": false, 00:06:06.543 "nvme_io": false, 00:06:06.543 "nvme_io_md": false, 00:06:06.543 "write_zeroes": true, 00:06:06.543 "zcopy": true, 00:06:06.543 "get_zone_info": false, 00:06:06.543 "zone_management": false, 00:06:06.543 "zone_append": false, 00:06:06.543 "compare": false, 00:06:06.543 "compare_and_write": false, 00:06:06.543 "abort": true, 00:06:06.543 "seek_hole": false, 00:06:06.543 "seek_data": false, 00:06:06.543 "copy": true, 00:06:06.543 "nvme_iov_md": false 00:06:06.543 }, 00:06:06.543 "memory_domains": [ 00:06:06.543 { 00:06:06.543 "dma_device_id": "system", 00:06:06.543 "dma_device_type": 1 00:06:06.543 }, 00:06:06.543 { 00:06:06.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.543 "dma_device_type": 2 00:06:06.543 } 00:06:06.543 ], 00:06:06.543 "driver_specific": {} 00:06:06.543 }, 00:06:06.543 { 00:06:06.543 "name": "Passthru0", 00:06:06.543 "aliases": [ 00:06:06.543 "1dc8e3b5-5d49-5141-83d4-ffc4059c6209" 00:06:06.543 ], 00:06:06.543 "product_name": "passthru", 00:06:06.543 "block_size": 512, 00:06:06.543 "num_blocks": 16384, 00:06:06.543 "uuid": "1dc8e3b5-5d49-5141-83d4-ffc4059c6209", 00:06:06.543 "assigned_rate_limits": { 00:06:06.543 "rw_ios_per_sec": 0, 00:06:06.543 "rw_mbytes_per_sec": 0, 00:06:06.543 "r_mbytes_per_sec": 0, 00:06:06.543 "w_mbytes_per_sec": 0 00:06:06.543 }, 00:06:06.543 "claimed": false, 00:06:06.543 "zoned": false, 00:06:06.543 "supported_io_types": { 00:06:06.543 "read": true, 00:06:06.543 "write": true, 00:06:06.543 "unmap": true, 00:06:06.543 "flush": true, 00:06:06.543 "reset": true, 
00:06:06.543 "nvme_admin": false, 00:06:06.543 "nvme_io": false, 00:06:06.543 "nvme_io_md": false, 00:06:06.543 "write_zeroes": true, 00:06:06.543 "zcopy": true, 00:06:06.543 "get_zone_info": false, 00:06:06.543 "zone_management": false, 00:06:06.543 "zone_append": false, 00:06:06.543 "compare": false, 00:06:06.543 "compare_and_write": false, 00:06:06.543 "abort": true, 00:06:06.543 "seek_hole": false, 00:06:06.543 "seek_data": false, 00:06:06.543 "copy": true, 00:06:06.543 "nvme_iov_md": false 00:06:06.543 }, 00:06:06.543 "memory_domains": [ 00:06:06.543 { 00:06:06.543 "dma_device_id": "system", 00:06:06.543 "dma_device_type": 1 00:06:06.543 }, 00:06:06.543 { 00:06:06.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.543 "dma_device_type": 2 00:06:06.544 } 00:06:06.544 ], 00:06:06.544 "driver_specific": { 00:06:06.544 "passthru": { 00:06:06.544 "name": "Passthru0", 00:06:06.544 "base_bdev_name": "Malloc2" 00:06:06.544 } 00:06:06.544 } 00:06:06.544 } 00:06:06.544 ]' 00:06:06.544 12:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:06.544 12:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:06.544 12:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:06.544 12:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.544 12:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.544 12:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.544 12:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:06.544 12:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.544 12:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.544 12:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.544 12:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:06.544 12:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.544 12:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.544 12:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.544 12:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:06.544 12:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:06.544 12:27:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:06.544 00:06:06.544 real 0m0.271s 00:06:06.544 user 0m0.179s 00:06:06.544 sys 0m0.045s 00:06:06.544 12:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.544 12:27:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.544 ************************************ 00:06:06.544 END TEST rpc_daemon_integrity 00:06:06.544 ************************************ 00:06:06.544 12:27:46 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:06.544 12:27:46 rpc -- rpc/rpc.sh@84 -- # killprocess 659154 00:06:06.544 12:27:46 rpc -- common/autotest_common.sh@954 -- # '[' -z 659154 ']' 00:06:06.544 12:27:46 rpc -- common/autotest_common.sh@958 -- # kill -0 659154 00:06:06.544 12:27:46 rpc -- common/autotest_common.sh@959 -- # uname 00:06:06.544 12:27:46 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.544 12:27:46 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 659154 
00:06:06.544 12:27:46 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.544 12:27:46 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.544 12:27:46 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 659154' 00:06:06.544 killing process with pid 659154 00:06:06.544 12:27:46 rpc -- common/autotest_common.sh@973 -- # kill 659154 00:06:06.544 12:27:46 rpc -- common/autotest_common.sh@978 -- # wait 659154 00:06:07.110 00:06:07.110 real 0m2.172s 00:06:07.110 user 0m2.716s 00:06:07.110 sys 0m0.807s 00:06:07.110 12:27:47 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.110 12:27:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.110 ************************************ 00:06:07.110 END TEST rpc 00:06:07.110 ************************************ 00:06:07.110 12:27:47 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:07.110 12:27:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.110 12:27:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.110 12:27:47 -- common/autotest_common.sh@10 -- # set +x 00:06:07.110 ************************************ 00:06:07.110 START TEST skip_rpc 00:06:07.110 ************************************ 00:06:07.110 12:27:47 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:07.110 * Looking for test storage... 00:06:07.110 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:07.110 12:27:47 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.110 12:27:47 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.110 12:27:47 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:07.110 12:27:47 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:07.110 12:27:47 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.110 12:27:47 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.110 12:27:47 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.110 12:27:47 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.110 12:27:47 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.110 12:27:47 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.110 12:27:47 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.110 12:27:47 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.111 12:27:47 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.111 12:27:47 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.111 12:27:47 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.111 12:27:47 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:07.111 12:27:47 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:07.111 12:27:47 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.111 12:27:47 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.111 12:27:47 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:07.111 12:27:47 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:07.111 12:27:47 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.111 12:27:47 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:07.111 12:27:47 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.111 12:27:47 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:07.111 12:27:47 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:07.111 12:27:47 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.111 12:27:47 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:07.111 12:27:47 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.111 12:27:47 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.111 12:27:47 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.111 12:27:47 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:07.111 12:27:47 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.111 12:27:47 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:07.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.111 --rc genhtml_branch_coverage=1 00:06:07.111 --rc genhtml_function_coverage=1 00:06:07.111 --rc genhtml_legend=1 00:06:07.111 --rc geninfo_all_blocks=1 00:06:07.111 --rc geninfo_unexecuted_blocks=1 00:06:07.111 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:07.111 ' 00:06:07.111 12:27:47 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:07.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.111 --rc genhtml_branch_coverage=1 00:06:07.111 --rc genhtml_function_coverage=1 00:06:07.111 --rc genhtml_legend=1 00:06:07.111 --rc geninfo_all_blocks=1 00:06:07.111 --rc geninfo_unexecuted_blocks=1 00:06:07.111 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:07.111 ' 00:06:07.111 12:27:47 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:07.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.111 --rc genhtml_branch_coverage=1 00:06:07.111 --rc genhtml_function_coverage=1 00:06:07.111 --rc genhtml_legend=1 00:06:07.111 --rc geninfo_all_blocks=1 00:06:07.111 --rc geninfo_unexecuted_blocks=1 00:06:07.111 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:07.111 ' 00:06:07.111 12:27:47 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:07.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.111 --rc genhtml_branch_coverage=1 00:06:07.111 --rc genhtml_function_coverage=1 00:06:07.111 --rc genhtml_legend=1 00:06:07.111 --rc geninfo_all_blocks=1 00:06:07.111 --rc geninfo_unexecuted_blocks=1 00:06:07.111 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:07.111 ' 00:06:07.111 12:27:47 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:07.369 12:27:47 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:07.369 12:27:47 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:07.369 12:27:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.369 12:27:47 
skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.369 12:27:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.369 ************************************ 00:06:07.369 START TEST skip_rpc 00:06:07.369 ************************************ 00:06:07.369 12:27:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:07.369 12:27:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=659619 00:06:07.369 12:27:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.369 12:27:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:07.369 12:27:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:07.369 [2024-11-15 12:27:47.518096] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:06:07.369 [2024-11-15 12:27:47.518166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid659619 ] 00:06:07.369 [2024-11-15 12:27:47.602855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.369 [2024-11-15 12:27:47.648758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 659619 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 659619 ']' 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 659619 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 659619 00:06:12.630 
12:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 659619' 00:06:12.630 killing process with pid 659619 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 659619 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 659619 00:06:12.630 00:06:12.630 real 0m5.391s 00:06:12.630 user 0m5.107s 00:06:12.630 sys 0m0.329s 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.630 12:27:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.630 ************************************ 00:06:12.630 END TEST skip_rpc 00:06:12.630 ************************************ 00:06:12.630 12:27:52 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:12.630 12:27:52 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.630 12:27:52 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.631 12:27:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.631 ************************************ 00:06:12.631 START TEST skip_rpc_with_json 00:06:12.631 ************************************ 00:06:12.631 12:27:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:12.631 12:27:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:12.631 12:27:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=660349 00:06:12.631 12:27:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.631 12:27:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.631 12:27:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 660349 00:06:12.631 12:27:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 660349 ']' 00:06:12.631 12:27:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.631 12:27:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.631 12:27:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.631 12:27:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.631 12:27:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:12.889 [2024-11-15 12:27:52.995478] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:06:12.889 [2024-11-15 12:27:52.995557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid660349 ] 00:06:12.889 [2024-11-15 12:27:53.084772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.889 [2024-11-15 12:27:53.133030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.148 12:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.148 12:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:13.148 12:27:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:13.148 12:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.148 12:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.148 [2024-11-15 12:27:53.364460] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:13.148 request: 00:06:13.148 { 00:06:13.148 "trtype": "tcp", 00:06:13.148 "method": "nvmf_get_transports", 00:06:13.148 "req_id": 1 00:06:13.148 } 00:06:13.148 Got JSON-RPC error response 00:06:13.148 response: 00:06:13.148 { 00:06:13.148 "code": -19, 00:06:13.148 "message": "No such device" 00:06:13.148 } 00:06:13.148 12:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:13.148 12:27:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:13.148 12:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.148 12:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.148 [2024-11-15 12:27:53.372535] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:13.148 12:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.148 12:27:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:13.148 12:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.148 12:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.407 12:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.407 12:27:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:13.407 { 00:06:13.407 "subsystems": [ 00:06:13.407 { 00:06:13.407 "subsystem": "scheduler", 00:06:13.407 "config": [ 00:06:13.407 { 00:06:13.407 "method": "framework_set_scheduler", 00:06:13.407 "params": { 00:06:13.407 "name": "static" 00:06:13.407 } 00:06:13.407 } 00:06:13.407 ] 00:06:13.407 }, 00:06:13.407 { 00:06:13.407 "subsystem": "vmd", 00:06:13.407 "config": [] 00:06:13.407 }, 00:06:13.407 { 00:06:13.407 "subsystem": "sock", 00:06:13.407 "config": [ 00:06:13.407 { 00:06:13.407 "method": "sock_set_default_impl", 00:06:13.407 "params": { 00:06:13.407 "impl_name": "posix" 00:06:13.407 } 00:06:13.407 }, 00:06:13.407 { 00:06:13.407 "method": "sock_impl_set_options", 00:06:13.407 "params": { 00:06:13.407 "impl_name": "ssl", 00:06:13.407 "recv_buf_size": 4096, 00:06:13.407 "send_buf_size": 4096, 00:06:13.407 "enable_recv_pipe": true, 00:06:13.407 "enable_quickack": false, 00:06:13.407 "enable_placement_id": 
0, 00:06:13.407 "enable_zerocopy_send_server": true, 00:06:13.407 "enable_zerocopy_send_client": false, 00:06:13.407 "zerocopy_threshold": 0, 00:06:13.407 "tls_version": 0, 00:06:13.407 "enable_ktls": false 00:06:13.407 } 00:06:13.407 }, 00:06:13.407 { 00:06:13.407 "method": "sock_impl_set_options", 00:06:13.407 "params": { 00:06:13.407 "impl_name": "posix", 00:06:13.407 "recv_buf_size": 2097152, 00:06:13.407 "send_buf_size": 2097152, 00:06:13.407 "enable_recv_pipe": true, 00:06:13.407 "enable_quickack": false, 00:06:13.407 "enable_placement_id": 0, 00:06:13.407 "enable_zerocopy_send_server": true, 00:06:13.407 "enable_zerocopy_send_client": false, 00:06:13.407 "zerocopy_threshold": 0, 00:06:13.407 "tls_version": 0, 00:06:13.407 "enable_ktls": false 00:06:13.407 } 00:06:13.407 } 00:06:13.407 ] 00:06:13.407 }, 00:06:13.407 { 00:06:13.407 "subsystem": "iobuf", 00:06:13.407 "config": [ 00:06:13.407 { 00:06:13.407 "method": "iobuf_set_options", 00:06:13.407 "params": { 00:06:13.407 "small_pool_count": 8192, 00:06:13.407 "large_pool_count": 1024, 00:06:13.407 "small_bufsize": 8192, 00:06:13.407 "large_bufsize": 135168, 00:06:13.407 "enable_numa": false 00:06:13.407 } 00:06:13.407 } 00:06:13.407 ] 00:06:13.407 }, 00:06:13.407 { 00:06:13.407 "subsystem": "keyring", 00:06:13.407 "config": [] 00:06:13.407 }, 00:06:13.407 { 00:06:13.407 "subsystem": "vfio_user_target", 00:06:13.407 "config": null 00:06:13.407 }, 00:06:13.407 { 00:06:13.407 "subsystem": "fsdev", 00:06:13.407 "config": [ 00:06:13.407 { 00:06:13.407 "method": "fsdev_set_opts", 00:06:13.407 "params": { 00:06:13.407 "fsdev_io_pool_size": 65535, 00:06:13.407 "fsdev_io_cache_size": 256 00:06:13.407 } 00:06:13.407 } 00:06:13.407 ] 00:06:13.407 }, 00:06:13.407 { 00:06:13.407 "subsystem": "accel", 00:06:13.407 "config": [ 00:06:13.407 { 00:06:13.407 "method": "accel_set_options", 00:06:13.407 "params": { 00:06:13.407 "small_cache_size": 128, 00:06:13.407 "large_cache_size": 16, 00:06:13.407 "task_count": 2048, 00:06:13.407 "sequence_count": 2048, 00:06:13.407 "buf_count": 2048 00:06:13.407 } 00:06:13.407 } 00:06:13.407 ] 00:06:13.407 }, 00:06:13.407 { 00:06:13.407 "subsystem": "bdev", 00:06:13.407 "config": [ 00:06:13.407 { 00:06:13.407 "method": "bdev_set_options", 00:06:13.407 "params": { 00:06:13.407 "bdev_io_pool_size": 65535, 00:06:13.407 "bdev_io_cache_size": 256, 00:06:13.407 "bdev_auto_examine": true, 00:06:13.407 "iobuf_small_cache_size": 128, 00:06:13.407 "iobuf_large_cache_size": 16 00:06:13.407 } 00:06:13.407 }, 00:06:13.407 { 00:06:13.407 "method": "bdev_raid_set_options", 00:06:13.407 "params": { 00:06:13.407 "process_window_size_kb": 1024, 00:06:13.407 "process_max_bandwidth_mb_sec": 0 00:06:13.407 } 00:06:13.407 }, 00:06:13.407 { 00:06:13.407 "method": "bdev_nvme_set_options", 00:06:13.407 "params": { 00:06:13.407 "action_on_timeout": "none", 00:06:13.407 "timeout_us": 0, 00:06:13.407 "timeout_admin_us": 0, 00:06:13.407 "keep_alive_timeout_ms": 10000, 00:06:13.407 "arbitration_burst": 0, 00:06:13.407 "low_priority_weight": 0, 00:06:13.407 "medium_priority_weight": 0, 00:06:13.407 "high_priority_weight": 0, 00:06:13.407 "nvme_adminq_poll_period_us": 10000, 00:06:13.407 "nvme_ioq_poll_period_us": 0, 00:06:13.407 "io_queue_requests": 0, 00:06:13.407 "delay_cmd_submit": true, 00:06:13.407 "transport_retry_count": 4, 00:06:13.407 "bdev_retry_count": 3, 00:06:13.407 "transport_ack_timeout": 0, 00:06:13.407 "ctrlr_loss_timeout_sec": 0, 00:06:13.407 "reconnect_delay_sec": 0, 00:06:13.407 "fast_io_fail_timeout_sec": 0, 00:06:13.407 
"disable_auto_failback": false, 00:06:13.407 "generate_uuids": false, 00:06:13.407 "transport_tos": 0, 00:06:13.407 "nvme_error_stat": false, 00:06:13.407 "rdma_srq_size": 0, 00:06:13.407 "io_path_stat": false, 00:06:13.407 "allow_accel_sequence": false, 00:06:13.407 "rdma_max_cq_size": 0, 00:06:13.407 "rdma_cm_event_timeout_ms": 0, 00:06:13.407 "dhchap_digests": [ 00:06:13.407 "sha256", 00:06:13.407 "sha384", 00:06:13.407 "sha512" 00:06:13.407 ], 00:06:13.407 "dhchap_dhgroups": [ 00:06:13.407 "null", 00:06:13.407 "ffdhe2048", 00:06:13.407 "ffdhe3072", 00:06:13.407 "ffdhe4096", 00:06:13.407 "ffdhe6144", 00:06:13.407 "ffdhe8192" 00:06:13.407 ] 00:06:13.407 } 00:06:13.407 }, 00:06:13.407 { 00:06:13.407 "method": "bdev_nvme_set_hotplug", 00:06:13.407 "params": { 00:06:13.407 "period_us": 100000, 00:06:13.407 "enable": false 00:06:13.407 } 00:06:13.407 }, 00:06:13.407 { 00:06:13.407 "method": "bdev_iscsi_set_options", 00:06:13.407 "params": { 00:06:13.407 "timeout_sec": 30 00:06:13.407 } 00:06:13.407 }, 00:06:13.407 { 00:06:13.407 "method": "bdev_wait_for_examine" 00:06:13.407 } 00:06:13.407 ] 00:06:13.407 }, 00:06:13.407 { 00:06:13.407 "subsystem": "nvmf", 00:06:13.407 "config": [ 00:06:13.407 { 00:06:13.407 "method": "nvmf_set_config", 00:06:13.407 "params": { 00:06:13.407 "discovery_filter": "match_any", 00:06:13.407 "admin_cmd_passthru": { 00:06:13.407 "identify_ctrlr": false 00:06:13.407 }, 00:06:13.407 "dhchap_digests": [ 00:06:13.407 "sha256", 00:06:13.407 "sha384", 00:06:13.407 "sha512" 00:06:13.407 ], 00:06:13.407 "dhchap_dhgroups": [ 00:06:13.407 "null", 00:06:13.407 "ffdhe2048", 00:06:13.407 "ffdhe3072", 00:06:13.407 "ffdhe4096", 00:06:13.407 "ffdhe6144", 00:06:13.407 "ffdhe8192" 00:06:13.407 ] 00:06:13.407 } 00:06:13.407 }, 00:06:13.407 { 00:06:13.407 "method": "nvmf_set_max_subsystems", 00:06:13.407 "params": { 00:06:13.407 "max_subsystems": 1024 00:06:13.407 } 00:06:13.407 }, 00:06:13.407 { 00:06:13.407 "method": "nvmf_set_crdt", 00:06:13.407 "params": { 00:06:13.407 "crdt1": 0, 00:06:13.407 "crdt2": 0, 00:06:13.407 "crdt3": 0 00:06:13.407 } 00:06:13.407 }, 00:06:13.407 { 00:06:13.407 "method": "nvmf_create_transport", 00:06:13.407 "params": { 00:06:13.407 "trtype": "TCP", 00:06:13.407 "max_queue_depth": 128, 00:06:13.407 "max_io_qpairs_per_ctrlr": 127, 00:06:13.407 "in_capsule_data_size": 4096, 00:06:13.407 "max_io_size": 131072, 00:06:13.407 "io_unit_size": 131072, 00:06:13.407 "max_aq_depth": 128, 00:06:13.407 "num_shared_buffers": 511, 00:06:13.407 "buf_cache_size": 4294967295, 00:06:13.407 "dif_insert_or_strip": false, 00:06:13.407 "zcopy": false, 00:06:13.408 "c2h_success": true, 00:06:13.408 "sock_priority": 0, 00:06:13.408 "abort_timeout_sec": 1, 00:06:13.408 "ack_timeout": 0, 00:06:13.408 "data_wr_pool_size": 0 00:06:13.408 } 00:06:13.408 } 00:06:13.408 ] 00:06:13.408 }, 00:06:13.408 { 00:06:13.408 "subsystem": "nbd", 00:06:13.408 "config": [] 00:06:13.408 }, 00:06:13.408 { 00:06:13.408 "subsystem": "ublk", 00:06:13.408 "config": [] 00:06:13.408 }, 00:06:13.408 { 00:06:13.408 "subsystem": "vhost_blk", 00:06:13.408 "config": [] 00:06:13.408 }, 00:06:13.408 { 00:06:13.408 "subsystem": "scsi", 00:06:13.408 "config": null 00:06:13.408 }, 00:06:13.408 { 00:06:13.408 "subsystem": "iscsi", 00:06:13.408 "config": [ 00:06:13.408 { 00:06:13.408 "method": "iscsi_set_options", 00:06:13.408 "params": { 00:06:13.408 "node_base": "iqn.2016-06.io.spdk", 00:06:13.408 "max_sessions": 128, 00:06:13.408 "max_connections_per_session": 2, 00:06:13.408 "max_queue_depth": 64, 00:06:13.408 
"default_time2wait": 2, 00:06:13.408 "default_time2retain": 20, 00:06:13.408 "first_burst_length": 8192, 00:06:13.408 "immediate_data": true, 00:06:13.408 "allow_duplicated_isid": false, 00:06:13.408 "error_recovery_level": 0, 00:06:13.408 "nop_timeout": 60, 00:06:13.408 "nop_in_interval": 30, 00:06:13.408 "disable_chap": false, 00:06:13.408 "require_chap": false, 00:06:13.408 "mutual_chap": false, 00:06:13.408 "chap_group": 0, 00:06:13.408 "max_large_datain_per_connection": 64, 00:06:13.408 "max_r2t_per_connection": 4, 00:06:13.408 "pdu_pool_size": 36864, 00:06:13.408 "immediate_data_pool_size": 16384, 00:06:13.408 "data_out_pool_size": 2048 00:06:13.408 } 00:06:13.408 } 00:06:13.408 ] 00:06:13.408 }, 00:06:13.408 { 00:06:13.408 "subsystem": "vhost_scsi", 00:06:13.408 "config": [] 00:06:13.408 } 00:06:13.408 ] 00:06:13.408 } 00:06:13.408 12:27:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:13.408 12:27:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 660349 00:06:13.408 12:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 660349 ']' 00:06:13.408 12:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 660349 00:06:13.408 12:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:13.408 12:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.408 12:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 660349 00:06:13.408 12:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.408 12:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.408 12:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 660349' 00:06:13.408 killing process with pid 660349 00:06:13.408 12:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 660349 00:06:13.408 12:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 660349 00:06:13.666 12:27:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=660530 00:06:13.666 12:27:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:13.666 12:27:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:18.927 12:27:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 660530 00:06:18.927 12:27:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 660530 ']' 00:06:18.927 12:27:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 660530 00:06:18.927 12:27:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:18.927 12:27:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.927 12:27:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 660530 00:06:18.927 12:27:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.927 12:27:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.927 12:27:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 660530' 00:06:18.927 killing process with pid 660530 00:06:18.927 12:27:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 660530 00:06:18.927 12:27:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 660530 00:06:19.185 12:27:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:19.185 12:27:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:19.185 00:06:19.185 real 0m6.351s 00:06:19.185 user 0m5.982s 00:06:19.185 sys 0m0.670s 00:06:19.185 12:27:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.186 12:27:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:19.186 ************************************ 00:06:19.186 END TEST skip_rpc_with_json 00:06:19.186 ************************************ 00:06:19.186 12:27:59 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:19.186 12:27:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.186 12:27:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.186 12:27:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.186 ************************************ 00:06:19.186 START TEST skip_rpc_with_delay 00:06:19.186 ************************************ 00:06:19.186 12:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:19.186 12:27:59 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:19.186 12:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:19.186 12:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:19.186 12:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.186 12:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.186 12:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.186 12:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.186 12:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.186 12:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.186 12:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.186 12:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:19.186 12:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 
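A minimal sketch (not taken from this run) of how a JSON configuration like the one dumped above — bdev, nvmf, and iscsi subsystems — is typically captured with save_config and then replayed with the RPC server disabled, which is what the --json invocation recorded above does. $SPDK_DIR and /tmp/config.json are illustrative stand-ins; the spdk_tgt flags (-m 0x1, --no-rpc-server, --json) mirror the command lines that appear in this log.

# Minimal sketch, assuming a local SPDK build at $SPDK_DIR; paths are illustrative.
SPDK_DIR=$HOME/spdk
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &                      # start the target with its RPC server
sleep 1                                                      # crude wait; the test harness uses waitforlisten
"$SPDK_DIR/scripts/rpc.py" save_config > /tmp/config.json    # emits JSON like the dump above
kill -SIGINT %1 && wait
# Replay the saved configuration without an RPC server, as the test above does:
"$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --json /tmp/config.json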
00:06:19.186 [2024-11-15 12:27:59.426084] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:19.186 12:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:19.186 12:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:19.186 12:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:19.186 12:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:19.186 00:06:19.186 real 0m0.043s 00:06:19.186 user 0m0.021s 00:06:19.186 sys 0m0.022s 00:06:19.186 12:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.186 12:27:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:19.186 ************************************ 00:06:19.186 END TEST skip_rpc_with_delay 00:06:19.186 ************************************ 00:06:19.186 12:27:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:19.186 12:27:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:19.186 12:27:59 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:19.186 12:27:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.186 12:27:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.186 12:27:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.186 ************************************ 00:06:19.186 START TEST exit_on_failed_rpc_init 00:06:19.186 ************************************ 00:06:19.186 12:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:19.186 12:27:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=661279 00:06:19.186 12:27:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 661279 00:06:19.186 12:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 661279 ']' 00:06:19.186 12:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.186 12:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.186 12:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.186 12:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.186 12:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:19.186 12:27:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.445 [2024-11-15 12:27:59.531376] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:06:19.445 [2024-11-15 12:27:59.531436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661279 ] 00:06:19.445 [2024-11-15 12:27:59.617737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.445 [2024-11-15 12:27:59.666917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.704 12:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.704 12:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:19.704 12:27:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.704 12:27:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.704 12:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:19.704 12:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.704 12:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.704 12:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.704 12:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.704 12:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.704 12:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.704 12:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.704 12:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.704 12:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:19.704 12:27:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.704 [2024-11-15 12:27:59.927150] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:06:19.704 [2024-11-15 12:27:59.927227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661343 ] 00:06:19.704 [2024-11-15 12:28:00.014106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.962 [2024-11-15 12:28:00.066541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.962 [2024-11-15 12:28:00.066617] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:19.962 [2024-11-15 12:28:00.066630] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:19.962 [2024-11-15 12:28:00.066638] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.962 12:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:19.962 12:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:19.962 12:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:19.962 12:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:19.962 12:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:19.962 12:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:19.962 12:28:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:19.962 12:28:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 661279 00:06:19.962 12:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 661279 ']' 00:06:19.962 12:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 661279 00:06:19.962 12:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:19.962 12:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.962 12:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 661279 00:06:19.962 12:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.962 12:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.962 12:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 661279' 00:06:19.962 killing process with pid 661279 00:06:19.962 12:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 661279 00:06:19.962 12:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 661279 00:06:20.221 00:06:20.221 real 0m0.990s 00:06:20.221 user 0m0.989s 00:06:20.221 sys 0m0.436s 00:06:20.221 12:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.221 12:28:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:20.221 ************************************ 00:06:20.221 END TEST exit_on_failed_rpc_init 00:06:20.221 ************************************ 00:06:20.221 12:28:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:20.221 00:06:20.221 real 0m13.284s 00:06:20.221 user 0m12.325s 00:06:20.221 sys 0m1.779s 00:06:20.221 12:28:00 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.221 12:28:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.221 ************************************ 00:06:20.221 END TEST skip_rpc 00:06:20.221 ************************************ 00:06:20.480 12:28:00 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:20.480 12:28:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.480 12:28:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.480 12:28:00 -- 
common/autotest_common.sh@10 -- # set +x 00:06:20.480 ************************************ 00:06:20.480 START TEST rpc_client 00:06:20.480 ************************************ 00:06:20.480 12:28:00 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:20.480 * Looking for test storage... 00:06:20.480 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:06:20.480 12:28:00 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:20.480 12:28:00 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:20.480 12:28:00 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:20.480 12:28:00 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.480 12:28:00 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:20.480 12:28:00 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.480 12:28:00 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:20.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.480 --rc genhtml_branch_coverage=1 00:06:20.480 --rc genhtml_function_coverage=1 00:06:20.480 --rc genhtml_legend=1 00:06:20.480 --rc geninfo_all_blocks=1 00:06:20.480 --rc geninfo_unexecuted_blocks=1 00:06:20.480 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:20.480 ' 00:06:20.480 12:28:00 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:20.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.480 --rc genhtml_branch_coverage=1 00:06:20.480 --rc genhtml_function_coverage=1 00:06:20.480 --rc genhtml_legend=1 00:06:20.480 --rc geninfo_all_blocks=1 00:06:20.480 --rc geninfo_unexecuted_blocks=1 00:06:20.480 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:20.480 ' 00:06:20.480 12:28:00 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:20.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.480 --rc genhtml_branch_coverage=1 00:06:20.480 --rc genhtml_function_coverage=1 00:06:20.480 --rc genhtml_legend=1 00:06:20.480 --rc geninfo_all_blocks=1 00:06:20.480 --rc geninfo_unexecuted_blocks=1 00:06:20.480 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:20.480 ' 00:06:20.480 12:28:00 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:20.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.480 --rc genhtml_branch_coverage=1 00:06:20.480 --rc genhtml_function_coverage=1 00:06:20.480 --rc genhtml_legend=1 00:06:20.480 --rc geninfo_all_blocks=1 00:06:20.480 --rc geninfo_unexecuted_blocks=1 00:06:20.480 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:20.480 ' 00:06:20.480 12:28:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:20.480 OK 00:06:20.480 12:28:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:20.480 00:06:20.480 real 0m0.196s 00:06:20.480 user 0m0.102s 00:06:20.480 sys 0m0.106s 00:06:20.480 12:28:00 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:06:20.480 12:28:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:20.480 ************************************ 00:06:20.480 END TEST rpc_client 00:06:20.480 ************************************ 00:06:20.739 12:28:00 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:06:20.739 12:28:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.739 12:28:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.739 12:28:00 -- common/autotest_common.sh@10 -- # set +x 00:06:20.739 ************************************ 00:06:20.739 START TEST json_config 00:06:20.739 ************************************ 00:06:20.739 12:28:00 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:06:20.739 12:28:00 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:20.739 12:28:00 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:20.739 12:28:00 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:20.739 12:28:01 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:20.739 12:28:01 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.739 12:28:01 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.739 12:28:01 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.739 12:28:01 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.739 12:28:01 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.739 12:28:01 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.739 12:28:01 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.739 12:28:01 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.739 12:28:01 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.739 12:28:01 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.739 12:28:01 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.739 12:28:01 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:20.739 12:28:01 json_config -- scripts/common.sh@345 -- # : 1 00:06:20.739 12:28:01 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.739 12:28:01 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.739 12:28:01 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:20.739 12:28:01 json_config -- scripts/common.sh@353 -- # local d=1 00:06:20.739 12:28:01 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.739 12:28:01 json_config -- scripts/common.sh@355 -- # echo 1 00:06:20.739 12:28:01 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.739 12:28:01 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:20.739 12:28:01 json_config -- scripts/common.sh@353 -- # local d=2 00:06:20.739 12:28:01 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.739 12:28:01 json_config -- scripts/common.sh@355 -- # echo 2 00:06:20.739 12:28:01 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.739 12:28:01 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.739 12:28:01 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.739 12:28:01 json_config -- scripts/common.sh@368 -- # return 0 00:06:20.739 12:28:01 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.739 12:28:01 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:20.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.739 --rc genhtml_branch_coverage=1 00:06:20.739 --rc genhtml_function_coverage=1 00:06:20.739 --rc genhtml_legend=1 00:06:20.739 --rc geninfo_all_blocks=1 00:06:20.739 --rc geninfo_unexecuted_blocks=1 00:06:20.739 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:20.739 ' 00:06:20.739 12:28:01 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:20.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.739 --rc genhtml_branch_coverage=1 00:06:20.739 --rc genhtml_function_coverage=1 00:06:20.739 --rc genhtml_legend=1 00:06:20.739 --rc geninfo_all_blocks=1 00:06:20.739 --rc geninfo_unexecuted_blocks=1 00:06:20.739 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:20.739 ' 00:06:20.739 12:28:01 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:20.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.739 --rc genhtml_branch_coverage=1 00:06:20.739 --rc genhtml_function_coverage=1 00:06:20.739 --rc genhtml_legend=1 00:06:20.740 --rc geninfo_all_blocks=1 00:06:20.740 --rc geninfo_unexecuted_blocks=1 00:06:20.740 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:20.740 ' 00:06:20.740 12:28:01 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:20.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.740 --rc genhtml_branch_coverage=1 00:06:20.740 --rc genhtml_function_coverage=1 00:06:20.740 --rc genhtml_legend=1 00:06:20.740 --rc geninfo_all_blocks=1 00:06:20.740 --rc geninfo_unexecuted_blocks=1 00:06:20.740 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:20.740 ' 00:06:20.740 12:28:01 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.740 12:28:01 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:20.740 12:28:01 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.740 12:28:01 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.740 12:28:01 json_config -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:06:20.740 12:28:01 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.740 12:28:01 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.740 12:28:01 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.740 12:28:01 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.740 12:28:01 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.740 12:28:01 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.740 12:28:01 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.999 12:28:01 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:06:20.999 12:28:01 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:06:20.999 12:28:01 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.999 12:28:01 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.999 12:28:01 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:20.999 12:28:01 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.999 12:28:01 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:20.999 12:28:01 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:20.999 12:28:01 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.999 12:28:01 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.999 12:28:01 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.999 12:28:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.999 12:28:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.999 12:28:01 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.999 12:28:01 json_config -- paths/export.sh@5 -- # export PATH 00:06:20.999 12:28:01 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.999 12:28:01 json_config -- nvmf/common.sh@51 -- # : 0 00:06:20.999 12:28:01 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:20.999 12:28:01 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:20.999 12:28:01 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.999 12:28:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.999 12:28:01 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.999 12:28:01 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:20.999 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:20.999 12:28:01 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:20.999 12:28:01 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:20.999 12:28:01 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:20.999 12:28:01 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:06:20.999 12:28:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:20.999 12:28:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:20.999 12:28:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:20.999 12:28:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:20.999 12:28:01 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:20.999 WARNING: No tests are enabled so not running JSON configuration tests 00:06:20.999 12:28:01 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:20.999 00:06:20.999 real 0m0.203s 00:06:20.999 user 0m0.126s 00:06:20.999 sys 0m0.085s 00:06:20.999 12:28:01 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.999 12:28:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.999 ************************************ 00:06:20.999 END TEST json_config 00:06:20.999 ************************************ 00:06:20.999 12:28:01 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:20.999 12:28:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.999 12:28:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.999 12:28:01 -- common/autotest_common.sh@10 -- # set +x 00:06:20.999 ************************************ 00:06:20.999 START TEST json_config_extra_key 00:06:20.999 ************************************ 00:06:20.999 12:28:01 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:20.999 12:28:01 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:20.999 12:28:01 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov 
--version 00:06:20.999 12:28:01 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:20.999 12:28:01 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.999 12:28:01 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:20.999 12:28:01 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.999 12:28:01 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:20.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.999 --rc genhtml_branch_coverage=1 00:06:20.999 --rc genhtml_function_coverage=1 00:06:20.999 --rc genhtml_legend=1 00:06:20.999 --rc geninfo_all_blocks=1 00:06:20.999 --rc geninfo_unexecuted_blocks=1 00:06:20.999 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:20.999 ' 00:06:20.999 12:28:01 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:20.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.999 --rc genhtml_branch_coverage=1 
00:06:20.999 --rc genhtml_function_coverage=1 00:06:20.999 --rc genhtml_legend=1 00:06:20.999 --rc geninfo_all_blocks=1 00:06:20.999 --rc geninfo_unexecuted_blocks=1 00:06:20.999 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:20.999 ' 00:06:20.999 12:28:01 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:20.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.999 --rc genhtml_branch_coverage=1 00:06:20.999 --rc genhtml_function_coverage=1 00:06:20.999 --rc genhtml_legend=1 00:06:20.999 --rc geninfo_all_blocks=1 00:06:20.999 --rc geninfo_unexecuted_blocks=1 00:06:20.999 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:20.999 ' 00:06:20.999 12:28:01 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:20.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.999 --rc genhtml_branch_coverage=1 00:06:20.999 --rc genhtml_function_coverage=1 00:06:20.999 --rc genhtml_legend=1 00:06:20.999 --rc geninfo_all_blocks=1 00:06:20.999 --rc geninfo_unexecuted_blocks=1 00:06:20.999 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:20.999 ' 00:06:20.999 12:28:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:21.258 12:28:01 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:21.258 12:28:01 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.258 12:28:01 json_config_extra_key -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.258 12:28:01 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.258 12:28:01 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.258 12:28:01 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.258 12:28:01 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.258 12:28:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:21.258 12:28:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:21.258 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:21.258 12:28:01 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:21.258 12:28:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:06:21.259 12:28:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:21.259 12:28:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # 
declare -A app_pid 00:06:21.259 12:28:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:21.259 12:28:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:21.259 12:28:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:21.259 12:28:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:21.259 12:28:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:21.259 12:28:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:21.259 12:28:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:21.259 12:28:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:21.259 INFO: launching applications... 00:06:21.259 12:28:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:06:21.259 12:28:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:21.259 12:28:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:21.259 12:28:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:21.259 12:28:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:21.259 12:28:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:21.259 12:28:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:21.259 12:28:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:21.259 12:28:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=661783 00:06:21.259 12:28:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:21.259 Waiting for target to run... 00:06:21.259 12:28:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 661783 /var/tmp/spdk_tgt.sock 00:06:21.259 12:28:01 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 661783 ']' 00:06:21.259 12:28:01 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:21.259 12:28:01 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:06:21.259 12:28:01 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.259 12:28:01 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:21.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
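A minimal sketch (not taken from this run) of the pattern the json_config_extra_key steps above exercise: launching spdk_tgt on an alternate RPC socket with a prebuilt JSON config, issuing one RPC against that socket, and shutting the target down with SIGINT. The flags and socket path mirror the invocation recorded just above; $SPDK_DIR and the use of rpc_get_methods as the probe RPC are illustrative assumptions.

# Minimal sketch, assuming a local SPDK build at $SPDK_DIR.
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json "$SPDK_DIR/test/json_config/extra_key.json" &
sleep 1                                                                # crude wait; the harness uses waitforlisten
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock rpc_get_methods   # talk to the target on the alternate socket
kill -SIGINT %1 && wait                                                # SIGINT-driven shutdown, as the test does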
00:06:21.259 12:28:01 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.259 12:28:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:21.259 [2024-11-15 12:28:01.404188] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:06:21.259 [2024-11-15 12:28:01.404269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661783 ] 00:06:21.826 [2024-11-15 12:28:01.951122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.826 [2024-11-15 12:28:02.011076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.084 12:28:02 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.084 12:28:02 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:22.084 12:28:02 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:22.084 00:06:22.084 12:28:02 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:22.084 INFO: shutting down applications... 00:06:22.084 12:28:02 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:22.084 12:28:02 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:22.084 12:28:02 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:22.084 12:28:02 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 661783 ]] 00:06:22.084 12:28:02 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 661783 00:06:22.084 12:28:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:22.084 12:28:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:22.084 12:28:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 661783 00:06:22.084 12:28:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:22.650 12:28:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:22.650 12:28:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:22.650 12:28:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 661783 00:06:22.650 12:28:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:22.650 12:28:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:22.650 12:28:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:22.650 12:28:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:22.650 SPDK target shutdown done 00:06:22.650 12:28:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:22.650 Success 00:06:22.650 00:06:22.650 real 0m1.604s 00:06:22.650 user 0m1.109s 00:06:22.650 sys 0m0.699s 00:06:22.650 12:28:02 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.650 12:28:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:22.650 ************************************ 00:06:22.650 END TEST json_config_extra_key 00:06:22.650 ************************************ 00:06:22.650 12:28:02 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 
00:06:22.650 12:28:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.650 12:28:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.650 12:28:02 -- common/autotest_common.sh@10 -- # set +x 00:06:22.650 ************************************ 00:06:22.650 START TEST alias_rpc 00:06:22.651 ************************************ 00:06:22.651 12:28:02 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:22.651 * Looking for test storage... 00:06:22.651 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:06:22.651 12:28:02 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:22.651 12:28:02 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:22.651 12:28:02 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:22.909 12:28:03 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.909 12:28:03 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:22.909 12:28:03 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.909 12:28:03 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:22.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.909 --rc genhtml_branch_coverage=1 00:06:22.909 --rc genhtml_function_coverage=1 00:06:22.909 --rc genhtml_legend=1 00:06:22.909 --rc geninfo_all_blocks=1 00:06:22.909 --rc geninfo_unexecuted_blocks=1 00:06:22.909 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:22.909 ' 00:06:22.909 12:28:03 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:22.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.909 --rc genhtml_branch_coverage=1 00:06:22.909 --rc genhtml_function_coverage=1 00:06:22.909 --rc genhtml_legend=1 00:06:22.909 --rc geninfo_all_blocks=1 00:06:22.909 --rc geninfo_unexecuted_blocks=1 00:06:22.909 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:22.909 ' 00:06:22.909 12:28:03 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:22.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.909 --rc genhtml_branch_coverage=1 00:06:22.909 --rc genhtml_function_coverage=1 00:06:22.909 --rc genhtml_legend=1 00:06:22.909 --rc geninfo_all_blocks=1 00:06:22.909 --rc geninfo_unexecuted_blocks=1 00:06:22.909 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:22.909 ' 00:06:22.909 12:28:03 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:22.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.909 --rc genhtml_branch_coverage=1 00:06:22.909 --rc genhtml_function_coverage=1 00:06:22.909 --rc genhtml_legend=1 00:06:22.909 --rc geninfo_all_blocks=1 00:06:22.909 --rc geninfo_unexecuted_blocks=1 00:06:22.909 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:22.909 ' 00:06:22.909 12:28:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:22.909 12:28:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=662035 00:06:22.909 12:28:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 662035 00:06:22.909 12:28:03 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 662035 ']' 00:06:22.909 12:28:03 alias_rpc -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:22.909 12:28:03 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.909 12:28:03 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.909 12:28:03 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.909 12:28:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.909 12:28:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.909 [2024-11-15 12:28:03.070497] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:06:22.909 [2024-11-15 12:28:03.070579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid662035 ] 00:06:22.909 [2024-11-15 12:28:03.157529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.909 [2024-11-15 12:28:03.204426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.169 12:28:03 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.169 12:28:03 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:23.169 12:28:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:23.428 12:28:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 662035 00:06:23.428 12:28:03 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 662035 ']' 00:06:23.428 12:28:03 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 662035 00:06:23.428 12:28:03 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:23.428 12:28:03 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.428 12:28:03 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 662035 00:06:23.428 12:28:03 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.428 12:28:03 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.428 12:28:03 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 662035' 00:06:23.428 killing process with pid 662035 00:06:23.428 12:28:03 alias_rpc -- common/autotest_common.sh@973 -- # kill 662035 00:06:23.428 12:28:03 alias_rpc -- common/autotest_common.sh@978 -- # wait 662035 00:06:23.686 00:06:23.686 real 0m1.164s 00:06:23.686 user 0m1.113s 00:06:23.686 sys 0m0.480s 00:06:23.686 12:28:04 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.686 12:28:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.686 ************************************ 00:06:23.686 END TEST alias_rpc 00:06:23.686 ************************************ 00:06:23.944 12:28:04 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:23.944 12:28:04 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:23.944 12:28:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.944 12:28:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.944 12:28:04 -- common/autotest_common.sh@10 -- # set +x 00:06:23.944 ************************************ 00:06:23.944 START TEST spdkcli_tcp 
00:06:23.944 ************************************ 00:06:23.944 12:28:04 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:23.944 * Looking for test storage... 00:06:23.944 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:06:23.944 12:28:04 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:23.944 12:28:04 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:23.944 12:28:04 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:23.944 12:28:04 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:23.944 12:28:04 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.944 12:28:04 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.944 12:28:04 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.944 12:28:04 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.944 12:28:04 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.944 12:28:04 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.944 12:28:04 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.944 12:28:04 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.944 12:28:04 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.944 12:28:04 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.944 12:28:04 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.944 12:28:04 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:23.944 12:28:04 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:23.944 12:28:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.203 12:28:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.203 12:28:04 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:24.203 12:28:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:24.203 12:28:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.203 12:28:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:24.203 12:28:04 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.203 12:28:04 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:24.203 12:28:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:24.203 12:28:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.203 12:28:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:24.203 12:28:04 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.203 12:28:04 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.203 12:28:04 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.203 12:28:04 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:24.203 12:28:04 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.203 12:28:04 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.203 --rc genhtml_branch_coverage=1 00:06:24.203 --rc genhtml_function_coverage=1 00:06:24.203 --rc genhtml_legend=1 00:06:24.203 --rc geninfo_all_blocks=1 00:06:24.203 --rc geninfo_unexecuted_blocks=1 00:06:24.203 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:24.203 ' 00:06:24.203 12:28:04 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.203 --rc genhtml_branch_coverage=1 00:06:24.203 --rc genhtml_function_coverage=1 00:06:24.203 --rc genhtml_legend=1 00:06:24.203 --rc geninfo_all_blocks=1 00:06:24.203 --rc geninfo_unexecuted_blocks=1 00:06:24.203 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:24.203 ' 00:06:24.203 12:28:04 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.203 --rc genhtml_branch_coverage=1 00:06:24.203 --rc genhtml_function_coverage=1 00:06:24.203 --rc genhtml_legend=1 00:06:24.203 --rc geninfo_all_blocks=1 00:06:24.203 --rc geninfo_unexecuted_blocks=1 00:06:24.203 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:24.203 ' 00:06:24.203 12:28:04 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.203 --rc genhtml_branch_coverage=1 00:06:24.203 --rc genhtml_function_coverage=1 00:06:24.203 --rc genhtml_legend=1 00:06:24.203 --rc geninfo_all_blocks=1 00:06:24.203 --rc geninfo_unexecuted_blocks=1 00:06:24.203 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:24.203 ' 00:06:24.203 12:28:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:06:24.203 12:28:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:24.203 12:28:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # 
spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:06:24.203 12:28:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:24.203 12:28:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:24.203 12:28:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:24.203 12:28:04 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:24.203 12:28:04 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:24.203 12:28:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:24.203 12:28:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=662271 00:06:24.203 12:28:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:24.203 12:28:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 662271 00:06:24.203 12:28:04 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 662271 ']' 00:06:24.203 12:28:04 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.203 12:28:04 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.203 12:28:04 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.203 12:28:04 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.203 12:28:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:24.203 [2024-11-15 12:28:04.338144] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:06:24.203 [2024-11-15 12:28:04.338219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid662271 ] 00:06:24.203 [2024-11-15 12:28:04.422736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.203 [2024-11-15 12:28:04.468229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.203 [2024-11-15 12:28:04.468230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.461 12:28:04 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.461 12:28:04 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:24.461 12:28:04 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=662279 00:06:24.461 12:28:04 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:24.461 12:28:04 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:24.721 [ 00:06:24.721 "spdk_get_version", 00:06:24.721 "rpc_get_methods", 00:06:24.721 "notify_get_notifications", 00:06:24.721 "notify_get_types", 00:06:24.721 "trace_get_info", 00:06:24.721 "trace_get_tpoint_group_mask", 00:06:24.721 "trace_disable_tpoint_group", 00:06:24.721 "trace_enable_tpoint_group", 00:06:24.721 "trace_clear_tpoint_mask", 00:06:24.721 "trace_set_tpoint_mask", 00:06:24.721 "fsdev_set_opts", 00:06:24.721 "fsdev_get_opts", 00:06:24.721 "framework_get_pci_devices", 00:06:24.721 "framework_get_config", 00:06:24.721 "framework_get_subsystems", 00:06:24.721 "vfu_tgt_set_base_path", 00:06:24.721 "keyring_get_keys", 
00:06:24.721 "iobuf_get_stats", 00:06:24.721 "iobuf_set_options", 00:06:24.721 "sock_get_default_impl", 00:06:24.721 "sock_set_default_impl", 00:06:24.721 "sock_impl_set_options", 00:06:24.721 "sock_impl_get_options", 00:06:24.721 "vmd_rescan", 00:06:24.721 "vmd_remove_device", 00:06:24.721 "vmd_enable", 00:06:24.721 "accel_get_stats", 00:06:24.721 "accel_set_options", 00:06:24.721 "accel_set_driver", 00:06:24.721 "accel_crypto_key_destroy", 00:06:24.721 "accel_crypto_keys_get", 00:06:24.721 "accel_crypto_key_create", 00:06:24.721 "accel_assign_opc", 00:06:24.721 "accel_get_module_info", 00:06:24.721 "accel_get_opc_assignments", 00:06:24.721 "bdev_get_histogram", 00:06:24.721 "bdev_enable_histogram", 00:06:24.721 "bdev_set_qos_limit", 00:06:24.721 "bdev_set_qd_sampling_period", 00:06:24.721 "bdev_get_bdevs", 00:06:24.721 "bdev_reset_iostat", 00:06:24.721 "bdev_get_iostat", 00:06:24.721 "bdev_examine", 00:06:24.721 "bdev_wait_for_examine", 00:06:24.721 "bdev_set_options", 00:06:24.721 "scsi_get_devices", 00:06:24.721 "thread_set_cpumask", 00:06:24.721 "scheduler_set_options", 00:06:24.721 "framework_get_governor", 00:06:24.721 "framework_get_scheduler", 00:06:24.721 "framework_set_scheduler", 00:06:24.721 "framework_get_reactors", 00:06:24.721 "thread_get_io_channels", 00:06:24.721 "thread_get_pollers", 00:06:24.721 "thread_get_stats", 00:06:24.721 "framework_monitor_context_switch", 00:06:24.721 "spdk_kill_instance", 00:06:24.721 "log_enable_timestamps", 00:06:24.721 "log_get_flags", 00:06:24.721 "log_clear_flag", 00:06:24.721 "log_set_flag", 00:06:24.721 "log_get_level", 00:06:24.721 "log_set_level", 00:06:24.721 "log_get_print_level", 00:06:24.721 "log_set_print_level", 00:06:24.721 "framework_enable_cpumask_locks", 00:06:24.721 "framework_disable_cpumask_locks", 00:06:24.721 "framework_wait_init", 00:06:24.721 "framework_start_init", 00:06:24.721 "virtio_blk_create_transport", 00:06:24.721 "virtio_blk_get_transports", 00:06:24.721 "vhost_controller_set_coalescing", 00:06:24.721 "vhost_get_controllers", 00:06:24.721 "vhost_delete_controller", 00:06:24.721 "vhost_create_blk_controller", 00:06:24.721 "vhost_scsi_controller_remove_target", 00:06:24.721 "vhost_scsi_controller_add_target", 00:06:24.721 "vhost_start_scsi_controller", 00:06:24.721 "vhost_create_scsi_controller", 00:06:24.721 "ublk_recover_disk", 00:06:24.721 "ublk_get_disks", 00:06:24.721 "ublk_stop_disk", 00:06:24.721 "ublk_start_disk", 00:06:24.721 "ublk_destroy_target", 00:06:24.721 "ublk_create_target", 00:06:24.721 "nbd_get_disks", 00:06:24.721 "nbd_stop_disk", 00:06:24.721 "nbd_start_disk", 00:06:24.721 "env_dpdk_get_mem_stats", 00:06:24.721 "nvmf_stop_mdns_prr", 00:06:24.721 "nvmf_publish_mdns_prr", 00:06:24.721 "nvmf_subsystem_get_listeners", 00:06:24.721 "nvmf_subsystem_get_qpairs", 00:06:24.721 "nvmf_subsystem_get_controllers", 00:06:24.721 "nvmf_get_stats", 00:06:24.721 "nvmf_get_transports", 00:06:24.721 "nvmf_create_transport", 00:06:24.721 "nvmf_get_targets", 00:06:24.721 "nvmf_delete_target", 00:06:24.721 "nvmf_create_target", 00:06:24.721 "nvmf_subsystem_allow_any_host", 00:06:24.721 "nvmf_subsystem_set_keys", 00:06:24.721 "nvmf_subsystem_remove_host", 00:06:24.721 "nvmf_subsystem_add_host", 00:06:24.721 "nvmf_ns_remove_host", 00:06:24.721 "nvmf_ns_add_host", 00:06:24.721 "nvmf_subsystem_remove_ns", 00:06:24.721 "nvmf_subsystem_set_ns_ana_group", 00:06:24.721 "nvmf_subsystem_add_ns", 00:06:24.721 "nvmf_subsystem_listener_set_ana_state", 00:06:24.721 "nvmf_discovery_get_referrals", 00:06:24.721 
"nvmf_discovery_remove_referral", 00:06:24.721 "nvmf_discovery_add_referral", 00:06:24.721 "nvmf_subsystem_remove_listener", 00:06:24.721 "nvmf_subsystem_add_listener", 00:06:24.721 "nvmf_delete_subsystem", 00:06:24.721 "nvmf_create_subsystem", 00:06:24.721 "nvmf_get_subsystems", 00:06:24.721 "nvmf_set_crdt", 00:06:24.721 "nvmf_set_config", 00:06:24.721 "nvmf_set_max_subsystems", 00:06:24.721 "iscsi_get_histogram", 00:06:24.721 "iscsi_enable_histogram", 00:06:24.721 "iscsi_set_options", 00:06:24.721 "iscsi_get_auth_groups", 00:06:24.721 "iscsi_auth_group_remove_secret", 00:06:24.721 "iscsi_auth_group_add_secret", 00:06:24.721 "iscsi_delete_auth_group", 00:06:24.721 "iscsi_create_auth_group", 00:06:24.721 "iscsi_set_discovery_auth", 00:06:24.721 "iscsi_get_options", 00:06:24.721 "iscsi_target_node_request_logout", 00:06:24.721 "iscsi_target_node_set_redirect", 00:06:24.721 "iscsi_target_node_set_auth", 00:06:24.721 "iscsi_target_node_add_lun", 00:06:24.721 "iscsi_get_stats", 00:06:24.721 "iscsi_get_connections", 00:06:24.721 "iscsi_portal_group_set_auth", 00:06:24.721 "iscsi_start_portal_group", 00:06:24.721 "iscsi_delete_portal_group", 00:06:24.721 "iscsi_create_portal_group", 00:06:24.721 "iscsi_get_portal_groups", 00:06:24.721 "iscsi_delete_target_node", 00:06:24.721 "iscsi_target_node_remove_pg_ig_maps", 00:06:24.721 "iscsi_target_node_add_pg_ig_maps", 00:06:24.721 "iscsi_create_target_node", 00:06:24.721 "iscsi_get_target_nodes", 00:06:24.721 "iscsi_delete_initiator_group", 00:06:24.721 "iscsi_initiator_group_remove_initiators", 00:06:24.721 "iscsi_initiator_group_add_initiators", 00:06:24.721 "iscsi_create_initiator_group", 00:06:24.721 "iscsi_get_initiator_groups", 00:06:24.721 "fsdev_aio_delete", 00:06:24.721 "fsdev_aio_create", 00:06:24.721 "keyring_linux_set_options", 00:06:24.721 "keyring_file_remove_key", 00:06:24.721 "keyring_file_add_key", 00:06:24.721 "vfu_virtio_create_fs_endpoint", 00:06:24.721 "vfu_virtio_create_scsi_endpoint", 00:06:24.721 "vfu_virtio_scsi_remove_target", 00:06:24.721 "vfu_virtio_scsi_add_target", 00:06:24.721 "vfu_virtio_create_blk_endpoint", 00:06:24.721 "vfu_virtio_delete_endpoint", 00:06:24.721 "iaa_scan_accel_module", 00:06:24.721 "dsa_scan_accel_module", 00:06:24.721 "ioat_scan_accel_module", 00:06:24.721 "accel_error_inject_error", 00:06:24.721 "bdev_iscsi_delete", 00:06:24.721 "bdev_iscsi_create", 00:06:24.721 "bdev_iscsi_set_options", 00:06:24.721 "bdev_virtio_attach_controller", 00:06:24.721 "bdev_virtio_scsi_get_devices", 00:06:24.721 "bdev_virtio_detach_controller", 00:06:24.721 "bdev_virtio_blk_set_hotplug", 00:06:24.721 "bdev_ftl_set_property", 00:06:24.721 "bdev_ftl_get_properties", 00:06:24.721 "bdev_ftl_get_stats", 00:06:24.721 "bdev_ftl_unmap", 00:06:24.721 "bdev_ftl_unload", 00:06:24.721 "bdev_ftl_delete", 00:06:24.721 "bdev_ftl_load", 00:06:24.721 "bdev_ftl_create", 00:06:24.721 "bdev_aio_delete", 00:06:24.721 "bdev_aio_rescan", 00:06:24.721 "bdev_aio_create", 00:06:24.722 "blobfs_create", 00:06:24.722 "blobfs_detect", 00:06:24.722 "blobfs_set_cache_size", 00:06:24.722 "bdev_zone_block_delete", 00:06:24.722 "bdev_zone_block_create", 00:06:24.722 "bdev_delay_delete", 00:06:24.722 "bdev_delay_create", 00:06:24.722 "bdev_delay_update_latency", 00:06:24.722 "bdev_split_delete", 00:06:24.722 "bdev_split_create", 00:06:24.722 "bdev_error_inject_error", 00:06:24.722 "bdev_error_delete", 00:06:24.722 "bdev_error_create", 00:06:24.722 "bdev_raid_set_options", 00:06:24.722 "bdev_raid_remove_base_bdev", 00:06:24.722 "bdev_raid_add_base_bdev", 
00:06:24.722 "bdev_raid_delete", 00:06:24.722 "bdev_raid_create", 00:06:24.722 "bdev_raid_get_bdevs", 00:06:24.722 "bdev_lvol_set_parent_bdev", 00:06:24.722 "bdev_lvol_set_parent", 00:06:24.722 "bdev_lvol_check_shallow_copy", 00:06:24.722 "bdev_lvol_start_shallow_copy", 00:06:24.722 "bdev_lvol_grow_lvstore", 00:06:24.722 "bdev_lvol_get_lvols", 00:06:24.722 "bdev_lvol_get_lvstores", 00:06:24.722 "bdev_lvol_delete", 00:06:24.722 "bdev_lvol_set_read_only", 00:06:24.722 "bdev_lvol_resize", 00:06:24.722 "bdev_lvol_decouple_parent", 00:06:24.722 "bdev_lvol_inflate", 00:06:24.722 "bdev_lvol_rename", 00:06:24.722 "bdev_lvol_clone_bdev", 00:06:24.722 "bdev_lvol_clone", 00:06:24.722 "bdev_lvol_snapshot", 00:06:24.722 "bdev_lvol_create", 00:06:24.722 "bdev_lvol_delete_lvstore", 00:06:24.722 "bdev_lvol_rename_lvstore", 00:06:24.722 "bdev_lvol_create_lvstore", 00:06:24.722 "bdev_passthru_delete", 00:06:24.722 "bdev_passthru_create", 00:06:24.722 "bdev_nvme_cuse_unregister", 00:06:24.722 "bdev_nvme_cuse_register", 00:06:24.722 "bdev_opal_new_user", 00:06:24.722 "bdev_opal_set_lock_state", 00:06:24.722 "bdev_opal_delete", 00:06:24.722 "bdev_opal_get_info", 00:06:24.722 "bdev_opal_create", 00:06:24.722 "bdev_nvme_opal_revert", 00:06:24.722 "bdev_nvme_opal_init", 00:06:24.722 "bdev_nvme_send_cmd", 00:06:24.722 "bdev_nvme_set_keys", 00:06:24.722 "bdev_nvme_get_path_iostat", 00:06:24.722 "bdev_nvme_get_mdns_discovery_info", 00:06:24.722 "bdev_nvme_stop_mdns_discovery", 00:06:24.722 "bdev_nvme_start_mdns_discovery", 00:06:24.722 "bdev_nvme_set_multipath_policy", 00:06:24.722 "bdev_nvme_set_preferred_path", 00:06:24.722 "bdev_nvme_get_io_paths", 00:06:24.722 "bdev_nvme_remove_error_injection", 00:06:24.722 "bdev_nvme_add_error_injection", 00:06:24.722 "bdev_nvme_get_discovery_info", 00:06:24.722 "bdev_nvme_stop_discovery", 00:06:24.722 "bdev_nvme_start_discovery", 00:06:24.722 "bdev_nvme_get_controller_health_info", 00:06:24.722 "bdev_nvme_disable_controller", 00:06:24.722 "bdev_nvme_enable_controller", 00:06:24.722 "bdev_nvme_reset_controller", 00:06:24.722 "bdev_nvme_get_transport_statistics", 00:06:24.722 "bdev_nvme_apply_firmware", 00:06:24.722 "bdev_nvme_detach_controller", 00:06:24.722 "bdev_nvme_get_controllers", 00:06:24.722 "bdev_nvme_attach_controller", 00:06:24.722 "bdev_nvme_set_hotplug", 00:06:24.722 "bdev_nvme_set_options", 00:06:24.722 "bdev_null_resize", 00:06:24.722 "bdev_null_delete", 00:06:24.722 "bdev_null_create", 00:06:24.722 "bdev_malloc_delete", 00:06:24.722 "bdev_malloc_create" 00:06:24.722 ] 00:06:24.722 12:28:04 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:24.722 12:28:04 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:24.722 12:28:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:24.722 12:28:04 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:24.722 12:28:04 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 662271 00:06:24.722 12:28:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 662271 ']' 00:06:24.722 12:28:04 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 662271 00:06:24.722 12:28:04 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:24.722 12:28:04 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.722 12:28:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 662271 00:06:24.722 12:28:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.722 12:28:04 spdkcli_tcp -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.722 12:28:04 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 662271' 00:06:24.722 killing process with pid 662271 00:06:24.722 12:28:04 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 662271 00:06:24.722 12:28:04 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 662271 00:06:24.980 00:06:24.980 real 0m1.216s 00:06:24.980 user 0m2.013s 00:06:24.980 sys 0m0.504s 00:06:24.980 12:28:05 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.980 12:28:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:24.980 ************************************ 00:06:24.980 END TEST spdkcli_tcp 00:06:24.980 ************************************ 00:06:25.239 12:28:05 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:25.239 12:28:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.239 12:28:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.239 12:28:05 -- common/autotest_common.sh@10 -- # set +x 00:06:25.239 ************************************ 00:06:25.239 START TEST dpdk_mem_utility 00:06:25.239 ************************************ 00:06:25.239 12:28:05 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:25.239 * Looking for test storage... 00:06:25.239 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:06:25.239 12:28:05 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:25.239 12:28:05 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:25.239 12:28:05 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:25.239 12:28:05 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:25.239 12:28:05 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.239 12:28:05 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.239 12:28:05 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.239 12:28:05 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.239 12:28:05 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.239 12:28:05 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.239 12:28:05 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.239 12:28:05 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.239 12:28:05 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.239 12:28:05 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.239 12:28:05 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.239 12:28:05 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:25.239 12:28:05 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:25.239 12:28:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.239 12:28:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:25.239 12:28:05 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:25.239 12:28:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:25.239 12:28:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.239 12:28:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:25.239 12:28:05 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.239 12:28:05 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:25.497 12:28:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:25.497 12:28:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.497 12:28:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:25.497 12:28:05 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.497 12:28:05 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.497 12:28:05 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.497 12:28:05 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:25.497 12:28:05 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.497 12:28:05 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:25.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.497 --rc genhtml_branch_coverage=1 00:06:25.497 --rc genhtml_function_coverage=1 00:06:25.497 --rc genhtml_legend=1 00:06:25.497 --rc geninfo_all_blocks=1 00:06:25.497 --rc geninfo_unexecuted_blocks=1 00:06:25.497 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:25.497 ' 00:06:25.497 12:28:05 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:25.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.497 --rc genhtml_branch_coverage=1 00:06:25.497 --rc genhtml_function_coverage=1 00:06:25.497 --rc genhtml_legend=1 00:06:25.497 --rc geninfo_all_blocks=1 00:06:25.497 --rc geninfo_unexecuted_blocks=1 00:06:25.497 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:25.497 ' 00:06:25.497 12:28:05 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:25.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.497 --rc genhtml_branch_coverage=1 00:06:25.497 --rc genhtml_function_coverage=1 00:06:25.497 --rc genhtml_legend=1 00:06:25.497 --rc geninfo_all_blocks=1 00:06:25.497 --rc geninfo_unexecuted_blocks=1 00:06:25.497 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:25.497 ' 00:06:25.497 12:28:05 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:25.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.497 --rc genhtml_branch_coverage=1 00:06:25.497 --rc genhtml_function_coverage=1 00:06:25.497 --rc genhtml_legend=1 00:06:25.497 --rc geninfo_all_blocks=1 00:06:25.497 --rc geninfo_unexecuted_blocks=1 00:06:25.497 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:25.497 ' 00:06:25.497 12:28:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:25.497 12:28:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=662519 00:06:25.498 12:28:05 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 662519 00:06:25.498 12:28:05 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 662519 ']' 00:06:25.498 12:28:05 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.498 12:28:05 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.498 12:28:05 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.498 12:28:05 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.498 12:28:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:25.498 12:28:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:25.498 [2024-11-15 12:28:05.611698] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:06:25.498 [2024-11-15 12:28:05.611783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid662519 ] 00:06:25.498 [2024-11-15 12:28:05.698610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.498 [2024-11-15 12:28:05.746608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.756 12:28:05 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.756 12:28:05 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:25.756 12:28:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:25.756 12:28:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:25.756 12:28:05 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.756 12:28:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:25.756 { 00:06:25.756 "filename": "/tmp/spdk_mem_dump.txt" 00:06:25.757 } 00:06:25.757 12:28:05 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.757 12:28:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:25.757 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:25.757 1 heaps totaling size 810.000000 MiB 00:06:25.757 size: 810.000000 MiB heap id: 0 00:06:25.757 end heaps---------- 00:06:25.757 9 mempools totaling size 595.772034 MiB 00:06:25.757 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:25.757 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:25.757 size: 92.545471 MiB name: bdev_io_662519 00:06:25.757 size: 50.003479 MiB name: msgpool_662519 00:06:25.757 size: 36.509338 MiB name: fsdev_io_662519 00:06:25.757 size: 21.763794 MiB name: PDU_Pool 00:06:25.757 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:25.757 size: 4.133484 MiB name: evtpool_662519 00:06:25.757 size: 0.026123 MiB name: Session_Pool 00:06:25.757 end mempools------- 00:06:25.757 6 memzones totaling size 4.142822 MiB 00:06:25.757 size: 1.000366 MiB name: RG_ring_0_662519 00:06:25.757 size: 1.000366 MiB name: RG_ring_1_662519 00:06:25.757 size: 1.000366 MiB name: RG_ring_4_662519 
00:06:25.757 size: 1.000366 MiB name: RG_ring_5_662519 00:06:25.757 size: 0.125366 MiB name: RG_ring_2_662519 00:06:25.757 size: 0.015991 MiB name: RG_ring_3_662519 00:06:25.757 end memzones------- 00:06:25.757 12:28:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:25.757 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:25.757 list of free elements. size: 10.862488 MiB 00:06:25.757 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:25.757 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:25.757 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:25.757 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:25.757 element at address: 0x200008000000 with size: 0.959839 MiB 00:06:25.757 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:25.757 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:25.757 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:25.757 element at address: 0x20001a600000 with size: 0.582886 MiB 00:06:25.757 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:25.757 element at address: 0x200003e00000 with size: 0.490723 MiB 00:06:25.757 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:25.757 element at address: 0x200010600000 with size: 0.481934 MiB 00:06:25.757 element at address: 0x200027a00000 with size: 0.410034 MiB 00:06:25.757 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:25.757 list of standard malloc elements. size: 199.218628 MiB 00:06:25.757 element at address: 0x2000081fff80 with size: 132.000122 MiB 00:06:25.757 element at address: 0x200003ffff80 with size: 64.000122 MiB 00:06:25.757 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:25.757 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:25.757 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:25.757 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:25.757 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:25.757 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:25.757 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:25.757 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:25.757 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:25.757 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:25.757 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:25.757 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:25.757 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:25.757 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:25.757 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:25.757 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:25.757 element at address: 0x20000085b100 with size: 0.000183 MiB 00:06:25.757 element at address: 0x2000008db3c0 with size: 0.000183 MiB 00:06:25.757 element at address: 0x2000008db5c0 with size: 0.000183 MiB 00:06:25.757 element at address: 0x2000008df880 with size: 0.000183 MiB 00:06:25.757 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:25.757 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:25.757 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:25.757 element at address: 0x200000cff0c0 with size: 0.000183 
MiB 00:06:25.757 element at address: 0x200003e7da00 with size: 0.000183 MiB 00:06:25.757 element at address: 0x200003e7dac0 with size: 0.000183 MiB 00:06:25.757 element at address: 0x200003efdd80 with size: 0.000183 MiB 00:06:25.757 element at address: 0x2000080fdd80 with size: 0.000183 MiB 00:06:25.757 element at address: 0x20001067b600 with size: 0.000183 MiB 00:06:25.757 element at address: 0x20001067b6c0 with size: 0.000183 MiB 00:06:25.757 element at address: 0x2000106fb980 with size: 0.000183 MiB 00:06:25.757 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:25.757 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:25.757 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:25.757 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:25.757 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:25.757 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:25.757 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:06:25.757 element at address: 0x200027a69040 with size: 0.000183 MiB 00:06:25.757 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:06:25.757 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:25.757 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:25.757 list of memzone associated elements. size: 599.918884 MiB 00:06:25.757 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:25.757 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:25.757 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:25.757 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:25.757 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:25.757 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_662519_0 00:06:25.757 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:25.757 associated memzone info: size: 48.002930 MiB name: MP_msgpool_662519_0 00:06:25.757 element at address: 0x2000107fdb80 with size: 36.008911 MiB 00:06:25.757 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_662519_0 00:06:25.757 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:25.757 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:25.757 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:25.757 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:25.757 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:25.757 associated memzone info: size: 3.000122 MiB name: MP_evtpool_662519_0 00:06:25.757 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:25.757 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_662519 00:06:25.757 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:25.757 associated memzone info: size: 1.007996 MiB name: MP_evtpool_662519 00:06:25.757 element at address: 0x2000106fba40 with size: 1.008118 MiB 00:06:25.757 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:25.757 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:25.757 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:25.757 element at address: 0x2000080fde40 with size: 1.008118 MiB 00:06:25.757 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:25.757 element at address: 0x200003efde40 with size: 1.008118 MiB 00:06:25.757 associated memzone info: size: 1.007996 MiB name: 
MP_SCSI_TASK_Pool 00:06:25.757 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:25.757 associated memzone info: size: 1.000366 MiB name: RG_ring_0_662519 00:06:25.757 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:25.757 associated memzone info: size: 1.000366 MiB name: RG_ring_1_662519 00:06:25.757 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:25.757 associated memzone info: size: 1.000366 MiB name: RG_ring_4_662519 00:06:25.757 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:25.757 associated memzone info: size: 1.000366 MiB name: RG_ring_5_662519 00:06:25.757 element at address: 0x20000085b1c0 with size: 0.500488 MiB 00:06:25.757 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_662519 00:06:25.757 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:25.757 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_662519 00:06:25.757 element at address: 0x20001067b780 with size: 0.500488 MiB 00:06:25.757 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:25.757 element at address: 0x200003e7db80 with size: 0.500488 MiB 00:06:25.757 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:25.757 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:25.758 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:25.758 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:25.758 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_662519 00:06:25.758 element at address: 0x2000008df940 with size: 0.125488 MiB 00:06:25.758 associated memzone info: size: 0.125366 MiB name: RG_ring_2_662519 00:06:25.758 element at address: 0x2000080f5b80 with size: 0.031738 MiB 00:06:25.758 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:25.758 element at address: 0x200027a69100 with size: 0.023743 MiB 00:06:25.758 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:25.758 element at address: 0x2000008db680 with size: 0.016113 MiB 00:06:25.758 associated memzone info: size: 0.015991 MiB name: RG_ring_3_662519 00:06:25.758 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:06:25.758 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:25.758 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:25.758 associated memzone info: size: 0.000183 MiB name: MP_msgpool_662519 00:06:25.758 element at address: 0x2000008db480 with size: 0.000305 MiB 00:06:25.758 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_662519 00:06:25.758 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:25.758 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_662519 00:06:25.758 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:06:25.758 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:25.758 12:28:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:25.758 12:28:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 662519 00:06:25.758 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 662519 ']' 00:06:25.758 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 662519 00:06:25.758 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:25.758 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:06:25.758 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 662519 00:06:26.017 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:26.017 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:26.017 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 662519' 00:06:26.017 killing process with pid 662519 00:06:26.017 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 662519 00:06:26.017 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 662519 00:06:26.275 00:06:26.275 real 0m1.060s 00:06:26.275 user 0m0.929s 00:06:26.275 sys 0m0.473s 00:06:26.275 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.275 12:28:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:26.275 ************************************ 00:06:26.275 END TEST dpdk_mem_utility 00:06:26.275 ************************************ 00:06:26.275 12:28:06 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:26.275 12:28:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.275 12:28:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.275 12:28:06 -- common/autotest_common.sh@10 -- # set +x 00:06:26.275 ************************************ 00:06:26.275 START TEST event 00:06:26.275 ************************************ 00:06:26.275 12:28:06 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:26.534 * Looking for test storage... 00:06:26.534 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:26.534 12:28:06 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:26.534 12:28:06 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:26.534 12:28:06 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:26.534 12:28:06 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:26.534 12:28:06 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.534 12:28:06 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.534 12:28:06 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.534 12:28:06 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.534 12:28:06 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.534 12:28:06 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.534 12:28:06 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.534 12:28:06 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.534 12:28:06 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.534 12:28:06 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.534 12:28:06 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.534 12:28:06 event -- scripts/common.sh@344 -- # case "$op" in 00:06:26.534 12:28:06 event -- scripts/common.sh@345 -- # : 1 00:06:26.534 12:28:06 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.534 12:28:06 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:26.534 12:28:06 event -- scripts/common.sh@365 -- # decimal 1 00:06:26.534 12:28:06 event -- scripts/common.sh@353 -- # local d=1 00:06:26.534 12:28:06 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.534 12:28:06 event -- scripts/common.sh@355 -- # echo 1 00:06:26.534 12:28:06 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.534 12:28:06 event -- scripts/common.sh@366 -- # decimal 2 00:06:26.534 12:28:06 event -- scripts/common.sh@353 -- # local d=2 00:06:26.534 12:28:06 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.534 12:28:06 event -- scripts/common.sh@355 -- # echo 2 00:06:26.534 12:28:06 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.534 12:28:06 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.534 12:28:06 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.534 12:28:06 event -- scripts/common.sh@368 -- # return 0 00:06:26.534 12:28:06 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.534 12:28:06 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:26.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.534 --rc genhtml_branch_coverage=1 00:06:26.534 --rc genhtml_function_coverage=1 00:06:26.534 --rc genhtml_legend=1 00:06:26.534 --rc geninfo_all_blocks=1 00:06:26.534 --rc geninfo_unexecuted_blocks=1 00:06:26.534 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.534 ' 00:06:26.534 12:28:06 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:26.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.534 --rc genhtml_branch_coverage=1 00:06:26.534 --rc genhtml_function_coverage=1 00:06:26.534 --rc genhtml_legend=1 00:06:26.534 --rc geninfo_all_blocks=1 00:06:26.534 --rc geninfo_unexecuted_blocks=1 00:06:26.534 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.534 ' 00:06:26.534 12:28:06 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:26.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.534 --rc genhtml_branch_coverage=1 00:06:26.534 --rc genhtml_function_coverage=1 00:06:26.534 --rc genhtml_legend=1 00:06:26.534 --rc geninfo_all_blocks=1 00:06:26.534 --rc geninfo_unexecuted_blocks=1 00:06:26.534 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.534 ' 00:06:26.534 12:28:06 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:26.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.534 --rc genhtml_branch_coverage=1 00:06:26.535 --rc genhtml_function_coverage=1 00:06:26.535 --rc genhtml_legend=1 00:06:26.535 --rc geninfo_all_blocks=1 00:06:26.535 --rc geninfo_unexecuted_blocks=1 00:06:26.535 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.535 ' 00:06:26.535 12:28:06 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:26.535 12:28:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:26.535 12:28:06 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:26.535 12:28:06 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:26.535 12:28:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:06:26.535 12:28:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.535 ************************************ 00:06:26.535 START TEST event_perf 00:06:26.535 ************************************ 00:06:26.535 12:28:06 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:26.535 Running I/O for 1 seconds...[2024-11-15 12:28:06.802548] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:06:26.535 [2024-11-15 12:28:06.802632] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid662757 ] 00:06:26.793 [2024-11-15 12:28:06.892504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:26.793 [2024-11-15 12:28:06.945623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.793 [2024-11-15 12:28:06.945723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.793 [2024-11-15 12:28:06.945815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.793 [2024-11-15 12:28:06.945823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.728 Running I/O for 1 seconds... 00:06:27.728 lcore 0: 182889 00:06:27.728 lcore 1: 182888 00:06:27.728 lcore 2: 182888 00:06:27.728 lcore 3: 182890 00:06:27.728 done. 00:06:27.728 00:06:27.728 real 0m1.206s 00:06:27.728 user 0m4.106s 00:06:27.728 sys 0m0.097s 00:06:27.728 12:28:07 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.728 12:28:07 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.728 ************************************ 00:06:27.728 END TEST event_perf 00:06:27.728 ************************************ 00:06:27.728 12:28:08 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:27.728 12:28:08 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:27.728 12:28:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.728 12:28:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.728 ************************************ 00:06:27.728 START TEST event_reactor 00:06:27.728 ************************************ 00:06:27.728 12:28:08 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:27.987 [2024-11-15 12:28:08.088994] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:06:27.987 [2024-11-15 12:28:08.089075] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid662935 ] 00:06:27.987 [2024-11-15 12:28:08.176880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.987 [2024-11-15 12:28:08.222709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.920 test_start 00:06:28.920 oneshot 00:06:28.920 tick 100 00:06:28.920 tick 100 00:06:28.920 tick 250 00:06:28.920 tick 100 00:06:28.920 tick 100 00:06:28.920 tick 250 00:06:28.920 tick 100 00:06:28.920 tick 500 00:06:28.920 tick 100 00:06:28.920 tick 100 00:06:28.920 tick 250 00:06:28.920 tick 100 00:06:28.920 tick 100 00:06:28.920 test_end 00:06:29.178 00:06:29.178 real 0m1.194s 00:06:29.178 user 0m1.100s 00:06:29.178 sys 0m0.089s 00:06:29.178 12:28:09 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.178 12:28:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:29.178 ************************************ 00:06:29.178 END TEST event_reactor 00:06:29.178 ************************************ 00:06:29.178 12:28:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:29.178 12:28:09 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:29.178 12:28:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.178 12:28:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.178 ************************************ 00:06:29.178 START TEST event_reactor_perf 00:06:29.178 ************************************ 00:06:29.178 12:28:09 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:29.178 [2024-11-15 12:28:09.350326] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:06:29.178 [2024-11-15 12:28:09.350391] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid663084 ] 00:06:29.178 [2024-11-15 12:28:09.434235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.178 [2024-11-15 12:28:09.480855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.552 test_start 00:06:30.552 test_end 00:06:30.552 Performance: 952872 events per second 00:06:30.552 00:06:30.552 real 0m1.183s 00:06:30.552 user 0m1.091s 00:06:30.552 sys 0m0.088s 00:06:30.552 12:28:10 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.552 12:28:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:30.552 ************************************ 00:06:30.552 END TEST event_reactor_perf 00:06:30.552 ************************************ 00:06:30.552 12:28:10 event -- event/event.sh@49 -- # uname -s 00:06:30.552 12:28:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:30.552 12:28:10 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:30.552 12:28:10 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.552 12:28:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.552 12:28:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.552 ************************************ 00:06:30.552 START TEST event_scheduler 00:06:30.552 ************************************ 00:06:30.552 12:28:10 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:30.552 * Looking for test storage... 
00:06:30.552 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:06:30.552 12:28:10 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:30.552 12:28:10 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:30.552 12:28:10 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:30.552 12:28:10 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.552 12:28:10 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:30.552 12:28:10 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.552 12:28:10 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:30.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.552 --rc genhtml_branch_coverage=1 00:06:30.552 --rc genhtml_function_coverage=1 00:06:30.552 --rc genhtml_legend=1 00:06:30.552 --rc geninfo_all_blocks=1 00:06:30.552 --rc geninfo_unexecuted_blocks=1 00:06:30.552 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:30.552 ' 00:06:30.552 12:28:10 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:30.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.552 --rc genhtml_branch_coverage=1 00:06:30.552 --rc genhtml_function_coverage=1 00:06:30.552 --rc genhtml_legend=1 00:06:30.552 --rc geninfo_all_blocks=1 00:06:30.552 --rc geninfo_unexecuted_blocks=1 00:06:30.552 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:30.552 ' 00:06:30.552 12:28:10 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:30.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.552 --rc genhtml_branch_coverage=1 00:06:30.552 --rc genhtml_function_coverage=1 00:06:30.552 --rc genhtml_legend=1 00:06:30.552 --rc geninfo_all_blocks=1 00:06:30.552 --rc geninfo_unexecuted_blocks=1 00:06:30.552 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:30.552 ' 00:06:30.552 12:28:10 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:30.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.552 --rc genhtml_branch_coverage=1 00:06:30.552 --rc genhtml_function_coverage=1 00:06:30.552 --rc genhtml_legend=1 00:06:30.552 --rc geninfo_all_blocks=1 00:06:30.552 --rc geninfo_unexecuted_blocks=1 00:06:30.552 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:30.552 ' 00:06:30.552 12:28:10 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:30.552 12:28:10 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=663366 00:06:30.552 12:28:10 event.event_scheduler -- 
scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:30.552 12:28:10 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:30.552 12:28:10 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 663366 00:06:30.552 12:28:10 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 663366 ']' 00:06:30.552 12:28:10 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.552 12:28:10 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.552 12:28:10 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.552 12:28:10 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.552 12:28:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.552 [2024-11-15 12:28:10.810936] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:06:30.552 [2024-11-15 12:28:10.811033] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid663366 ] 00:06:30.813 [2024-11-15 12:28:10.895465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:30.813 [2024-11-15 12:28:10.944874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.813 [2024-11-15 12:28:10.944949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.813 [2024-11-15 12:28:10.945026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.813 [2024-11-15 12:28:10.945027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.813 12:28:10 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.813 12:28:10 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:30.814 12:28:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:30.814 12:28:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.814 12:28:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.814 [2024-11-15 12:28:11.001650] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:30.814 [2024-11-15 12:28:11.001671] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:30.814 [2024-11-15 12:28:11.001683] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:30.814 [2024-11-15 12:28:11.001691] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:30.814 [2024-11-15 12:28:11.001698] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:30.814 12:28:11 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.814 12:28:11 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:30.814 12:28:11 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.814 
12:28:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.814 [2024-11-15 12:28:11.078271] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:30.814 12:28:11 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.814 12:28:11 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:30.814 12:28:11 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.814 12:28:11 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.814 12:28:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.814 ************************************ 00:06:30.814 START TEST scheduler_create_thread 00:06:30.814 ************************************ 00:06:30.814 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:30.814 12:28:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:30.814 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.814 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.814 2 00:06:30.814 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.815 12:28:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:30.815 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.815 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.815 3 00:06:30.815 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.815 12:28:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:30.815 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.815 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.076 4 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.076 5 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.076 12:28:11 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.076 6 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.076 7 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.076 8 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.076 9 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.076 10 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.076 12:28:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.010 12:28:12 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.010 12:28:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:32.010 12:28:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.010 12:28:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.381 12:28:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.381 12:28:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:33.381 12:28:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:33.381 12:28:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.381 12:28:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.312 12:28:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.312 00:06:34.312 real 0m3.383s 00:06:34.312 user 0m0.026s 00:06:34.312 sys 0m0.005s 00:06:34.312 12:28:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.312 12:28:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.312 ************************************ 00:06:34.312 END TEST scheduler_create_thread 00:06:34.312 ************************************ 00:06:34.312 12:28:14 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:34.312 12:28:14 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 663366 00:06:34.312 12:28:14 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 663366 ']' 00:06:34.312 12:28:14 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 663366 00:06:34.312 12:28:14 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:34.312 12:28:14 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.312 12:28:14 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 663366 00:06:34.312 12:28:14 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:34.312 12:28:14 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:34.312 12:28:14 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 663366' 00:06:34.312 killing process with pid 663366 00:06:34.312 12:28:14 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 663366 00:06:34.312 12:28:14 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 663366 00:06:34.576 [2024-11-15 12:28:14.882482] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
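The scheduler_create_thread test traced above reduces to a short RPC sequence against the scheduler test application. The lines below are a hand-written reconstruction from the commands visible in the trace, not a file from the test suite; they assume the scheduler app launched earlier (scheduler -m 0xF -p 0x2 --wait-for-rpc) is still running on the default socket /var/tmp/spdk.sock, and the thread IDs 11 and 12 simply mirror the IDs this particular run happened to get back.

rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py

# Pick the dynamic scheduler, then let framework initialization finish.
$rpc framework_set_scheduler dynamic
$rpc framework_start_init

# Four fully busy threads pinned to cores 0-3, then four idle pinned threads.
for mask in 0x1 0x2 0x4 0x8; do
  $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
done
for mask in 0x1 0x2 0x4 0x8; do
  $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done

# Unpinned threads with partial or zero load; thread 11 is then set to 50% busy.
$rpc --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
$rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
$rpc --plugin scheduler_plugin scheduler_thread_set_active 11 50

# Create one more thread and delete it again (the "deleted" thread, id 12 in this run).
$rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
$rpc --plugin scheduler_plugin scheduler_thread_delete 12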
00:06:34.834 00:06:34.834 real 0m4.495s 00:06:34.834 user 0m7.903s 00:06:34.834 sys 0m0.416s 00:06:34.834 12:28:15 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.835 12:28:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.835 ************************************ 00:06:34.835 END TEST event_scheduler 00:06:34.835 ************************************ 00:06:34.835 12:28:15 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:34.835 12:28:15 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:34.835 12:28:15 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.835 12:28:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.835 12:28:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.093 ************************************ 00:06:35.093 START TEST app_repeat 00:06:35.093 ************************************ 00:06:35.093 12:28:15 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:35.093 12:28:15 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.093 12:28:15 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.093 12:28:15 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:35.093 12:28:15 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.093 12:28:15 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:35.093 12:28:15 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:35.093 12:28:15 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:35.093 12:28:15 event.app_repeat -- event/event.sh@19 -- # repeat_pid=663970 00:06:35.093 12:28:15 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:35.093 12:28:15 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:35.093 12:28:15 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 663970' 00:06:35.093 Process app_repeat pid: 663970 00:06:35.093 12:28:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:35.093 12:28:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:35.093 spdk_app_start Round 0 00:06:35.093 12:28:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 663970 /var/tmp/spdk-nbd.sock 00:06:35.093 12:28:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 663970 ']' 00:06:35.093 12:28:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.093 12:28:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.093 12:28:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:35.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:35.093 12:28:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.093 12:28:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.093 [2024-11-15 12:28:15.222985] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
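Before the first round starts, app_repeat launches the test application and blocks until its RPC socket answers. The snippet below is a simplified, hand-rolled equivalent of that launch pieced together from the flags visible in the trace; the polling loop is only a stand-in for the harness's waitforlisten helper (which retries up to 100 times), and rpc_get_methods is used here purely as a cheap liveness probe.

APP=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat
RPC_PY=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock

# Two cores (mask 0x3), the harness's repeat_times=4 forwarded as -t 4, RPC on $SOCK.
$APP -r "$SOCK" -m 0x3 -t 4 &
repeat_pid=$!
trap 'kill $repeat_pid; exit 1' SIGINT SIGTERM EXIT

# Stand-in for waitforlisten: poll until the UNIX-domain socket accepts RPCs.
until "$RPC_PY" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done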
00:06:35.093 [2024-11-15 12:28:15.223072] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid663970 ] 00:06:35.093 [2024-11-15 12:28:15.312052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.093 [2024-11-15 12:28:15.361665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.093 [2024-11-15 12:28:15.361667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.351 12:28:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.351 12:28:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:35.351 12:28:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.351 Malloc0 00:06:35.351 12:28:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.609 Malloc1 00:06:35.609 12:28:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.609 12:28:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.609 12:28:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.609 12:28:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:35.609 12:28:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.609 12:28:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:35.609 12:28:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.609 12:28:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.609 12:28:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.609 12:28:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:35.609 12:28:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.609 12:28:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:35.609 12:28:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:35.609 12:28:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:35.609 12:28:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.609 12:28:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:35.867 /dev/nbd0 00:06:35.867 12:28:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:35.867 12:28:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:35.867 12:28:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:35.867 12:28:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:35.867 12:28:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:35.867 12:28:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:35.867 12:28:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:06:35.867 12:28:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:35.867 12:28:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:35.867 12:28:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:35.867 12:28:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:35.867 1+0 records in 00:06:35.867 1+0 records out 00:06:35.867 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0001064 s, 38.5 MB/s 00:06:35.867 12:28:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:35.867 12:28:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:35.867 12:28:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:35.867 12:28:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:35.867 12:28:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:35.867 12:28:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.867 12:28:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.867 12:28:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:36.125 /dev/nbd1 00:06:36.125 12:28:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:36.125 12:28:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:36.125 12:28:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:36.125 12:28:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:36.125 12:28:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:36.125 12:28:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:36.125 12:28:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:36.125 12:28:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:36.125 12:28:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:36.125 12:28:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:36.125 12:28:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:36.125 1+0 records in 00:06:36.125 1+0 records out 00:06:36.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236199 s, 17.3 MB/s 00:06:36.125 12:28:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:36.125 12:28:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:36.125 12:28:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:36.125 12:28:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:36.125 12:28:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:36.125 12:28:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.125 12:28:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
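Both NBD devices are now attached; the waitfornbd checks interleaved above amount to a small per-device readiness probe. A condensed sketch of that probe follows (the helper and file names are illustrative rather than the harness's own, and the retry pacing is assumed, since the trace does not show it):

wait_for_nbd() {
  local nbd_name=$1 testfile=$2
  # Retry until the kernel publishes the device in /proc/partitions
  # (the real helper caps this at 20 attempts).
  for _ in $(seq 1 20); do
    grep -q -w "$nbd_name" /proc/partitions && break
    sleep 0.1   # assumed pacing, not taken from the trace
  done
  # Read one 4 KiB block straight off the device with O_DIRECT; a non-empty
  # copy proves the NBD connection to the malloc bdev is actually serving I/O.
  dd if="/dev/$nbd_name" of="$testfile" bs=4096 count=1 iflag=direct
  [ "$(stat -c %s "$testfile")" -ne 0 ] || return 1
  rm -f "$testfile"
}

wait_for_nbd nbd0 /tmp/nbdtest
wait_for_nbd nbd1 /tmp/nbdtest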
00:06:36.125 12:28:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.125 12:28:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.125 12:28:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:36.383 { 00:06:36.383 "nbd_device": "/dev/nbd0", 00:06:36.383 "bdev_name": "Malloc0" 00:06:36.383 }, 00:06:36.383 { 00:06:36.383 "nbd_device": "/dev/nbd1", 00:06:36.383 "bdev_name": "Malloc1" 00:06:36.383 } 00:06:36.383 ]' 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:36.383 { 00:06:36.383 "nbd_device": "/dev/nbd0", 00:06:36.383 "bdev_name": "Malloc0" 00:06:36.383 }, 00:06:36.383 { 00:06:36.383 "nbd_device": "/dev/nbd1", 00:06:36.383 "bdev_name": "Malloc1" 00:06:36.383 } 00:06:36.383 ]' 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:36.383 /dev/nbd1' 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:36.383 /dev/nbd1' 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:36.383 256+0 records in 00:06:36.383 256+0 records out 00:06:36.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115931 s, 90.4 MB/s 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:36.383 256+0 records in 00:06:36.383 256+0 records out 00:06:36.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199795 s, 52.5 MB/s 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:36.383 256+0 records in 00:06:36.383 256+0 records out 00:06:36.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223848 s, 46.8 
MB/s 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.383 12:28:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:36.641 12:28:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:36.641 12:28:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:36.641 12:28:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:36.641 12:28:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.641 12:28:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.641 12:28:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:36.641 12:28:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.641 12:28:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.641 12:28:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.641 12:28:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:36.899 12:28:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:36.899 12:28:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:36.899 12:28:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:36.899 12:28:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.899 12:28:17 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.899 12:28:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:36.899 12:28:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.899 12:28:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.899 12:28:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.899 12:28:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.899 12:28:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.156 12:28:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:37.156 12:28:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:37.156 12:28:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.156 12:28:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:37.156 12:28:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:37.156 12:28:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.156 12:28:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:37.156 12:28:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:37.156 12:28:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:37.156 12:28:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:37.156 12:28:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:37.156 12:28:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:37.156 12:28:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:37.414 12:28:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:37.414 [2024-11-15 12:28:17.753097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.672 [2024-11-15 12:28:17.799342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.672 [2024-11-15 12:28:17.799344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.672 [2024-11-15 12:28:17.846271] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:37.672 [2024-11-15 12:28:17.846327] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:40.276 12:28:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:40.276 12:28:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:40.276 spdk_app_start Round 1 00:06:40.276 12:28:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 663970 /var/tmp/spdk-nbd.sock 00:06:40.276 12:28:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 663970 ']' 00:06:40.276 12:28:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:40.276 12:28:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.276 12:28:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:40.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:40.276 12:28:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.276 12:28:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:40.534 12:28:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.534 12:28:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:40.534 12:28:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.792 Malloc0 00:06:40.792 12:28:20 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:41.049 Malloc1 00:06:41.049 12:28:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.049 12:28:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.049 12:28:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.049 12:28:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:41.049 12:28:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.049 12:28:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:41.049 12:28:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.049 12:28:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.049 12:28:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.049 12:28:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:41.049 12:28:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.049 12:28:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:41.049 12:28:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:41.049 12:28:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:41.049 12:28:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.049 12:28:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:41.049 /dev/nbd0 00:06:41.307 12:28:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:41.307 12:28:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:41.307 12:28:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:41.307 12:28:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:41.307 12:28:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.307 12:28:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.307 12:28:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:41.307 12:28:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:41.307 12:28:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.307 12:28:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.307 12:28:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.307 1+0 records in 00:06:41.307 1+0 records out 00:06:41.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000124981 s, 32.8 MB/s 00:06:41.307 12:28:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:41.307 12:28:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:41.307 12:28:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:41.307 12:28:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.307 12:28:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:41.307 12:28:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.307 12:28:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.307 12:28:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:41.307 /dev/nbd1 00:06:41.565 12:28:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:41.565 12:28:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:41.565 12:28:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:41.565 12:28:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:41.565 12:28:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.565 12:28:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.565 12:28:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:41.565 12:28:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:41.565 12:28:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.565 12:28:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.565 12:28:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.565 1+0 records in 00:06:41.565 1+0 records out 00:06:41.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243004 s, 16.9 MB/s 00:06:41.565 12:28:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:41.565 12:28:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:41.565 12:28:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:41.565 12:28:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.565 12:28:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:41.565 12:28:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.565 12:28:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.565 12:28:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.565 12:28:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.565 12:28:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:41.565 12:28:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:41.565 { 00:06:41.565 "nbd_device": "/dev/nbd0", 00:06:41.565 "bdev_name": "Malloc0" 00:06:41.565 }, 00:06:41.565 { 00:06:41.565 "nbd_device": "/dev/nbd1", 00:06:41.565 "bdev_name": "Malloc1" 00:06:41.565 } 00:06:41.565 ]' 00:06:41.565 12:28:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.565 12:28:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:41.565 { 00:06:41.565 "nbd_device": "/dev/nbd0", 00:06:41.565 "bdev_name": "Malloc0" 00:06:41.565 }, 00:06:41.565 { 00:06:41.565 "nbd_device": "/dev/nbd1", 00:06:41.565 "bdev_name": "Malloc1" 00:06:41.565 } 00:06:41.565 ]' 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:41.824 /dev/nbd1' 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:41.824 /dev/nbd1' 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:41.824 256+0 records in 00:06:41.824 256+0 records out 00:06:41.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108682 s, 96.5 MB/s 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:41.824 256+0 records in 00:06:41.824 256+0 records out 00:06:41.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201894 s, 51.9 MB/s 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:41.824 256+0 records in 00:06:41.824 256+0 records out 00:06:41.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219459 s, 47.8 MB/s 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.824 12:28:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:41.824 12:28:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.824 12:28:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:41.824 12:28:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.824 12:28:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.824 12:28:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:41.824 12:28:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:41.824 12:28:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.824 12:28:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:42.082 12:28:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:42.082 12:28:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:42.082 12:28:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:42.082 12:28:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.082 12:28:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.082 12:28:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:42.082 12:28:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.082 12:28:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.082 12:28:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.082 12:28:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:42.340 12:28:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:42.340 12:28:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:42.340 12:28:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:42.340 12:28:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.341 12:28:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.341 12:28:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:42.341 12:28:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.341 12:28:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.341 12:28:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.341 12:28:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.341 12:28:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.341 12:28:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:42.341 12:28:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:42.341 12:28:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.341 12:28:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:42.341 12:28:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:42.341 12:28:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.341 12:28:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:42.341 12:28:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:42.599 12:28:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:42.599 12:28:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:42.599 12:28:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:42.599 12:28:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:42.599 12:28:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:42.599 12:28:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:42.856 [2024-11-15 12:28:23.069258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.856 [2024-11-15 12:28:23.114194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.856 [2024-11-15 12:28:23.114195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.856 [2024-11-15 12:28:23.161123] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:42.856 [2024-11-15 12:28:23.161183] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:46.139 12:28:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:46.139 12:28:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:46.139 spdk_app_start Round 2 00:06:46.139 12:28:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 663970 /var/tmp/spdk-nbd.sock 00:06:46.139 12:28:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 663970 ']' 00:06:46.139 12:28:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:46.139 12:28:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.139 12:28:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:46.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
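Round 2 now repeats the same create, write, verify and teardown cycle that Rounds 0 and 1 just ran. Condensed from the commands visible in the trace, one round amounts to the following; SPDK_DIR and RANDFILE are placeholder names for this sketch (the harness keeps its temp files under the SPDK test tree), and only the RPC methods actually seen above are used.

SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
RANDFILE=/tmp/nbdrandtest   # stand-in for the harness's temp file

# Export two 64 MB malloc bdevs (4096-byte blocks) as NBD devices.
$RPC bdev_malloc_create 64 4096          # -> Malloc0
$RPC bdev_malloc_create 64 4096          # -> Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1

# Push 1 MiB of random data through each device, then read it back with cmp
# to confirm the bdevs return exactly what was written.
dd if=/dev/urandom of="$RANDFILE" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
  dd if="$RANDFILE" of="$nbd" bs=4096 count=256 oflag=direct
done
for nbd in /dev/nbd0 /dev/nbd1; do
  cmp -b -n 1M "$RANDFILE" "$nbd"
done
rm "$RANDFILE"

# Teardown: detach both devices, confirm nothing is left exported, then stop
# the app so the next round starts from a fresh instance.
$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1
[ "$($RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)" -eq 0 ]
$RPC spdk_kill_instance SIGTERM
sleep 3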
00:06:46.139 12:28:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.139 12:28:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.139 12:28:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.139 12:28:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:46.139 12:28:26 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.139 Malloc0 00:06:46.139 12:28:26 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.397 Malloc1 00:06:46.397 12:28:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.397 12:28:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.397 12:28:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.397 12:28:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:46.397 12:28:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.397 12:28:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:46.397 12:28:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.397 12:28:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.397 12:28:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.397 12:28:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:46.397 12:28:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.397 12:28:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:46.397 12:28:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:46.397 12:28:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:46.397 12:28:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.397 12:28:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:46.397 /dev/nbd0 00:06:46.397 12:28:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:46.397 12:28:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:46.397 12:28:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:46.397 12:28:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:46.397 12:28:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.397 12:28:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.397 12:28:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:46.397 12:28:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:46.397 12:28:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:46.397 12:28:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:46.397 12:28:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.397 1+0 records in 00:06:46.397 1+0 records out 00:06:46.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242798 s, 16.9 MB/s 00:06:46.397 12:28:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:46.656 12:28:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:46.656 12:28:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:46.656 12:28:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:46.656 12:28:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:46.656 12:28:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.656 12:28:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.656 12:28:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:46.656 /dev/nbd1 00:06:46.656 12:28:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:46.656 12:28:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:46.656 12:28:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:46.656 12:28:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:46.656 12:28:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.656 12:28:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.656 12:28:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:46.656 12:28:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:46.656 12:28:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:46.656 12:28:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:46.656 12:28:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.656 1+0 records in 00:06:46.656 1+0 records out 00:06:46.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263277 s, 15.6 MB/s 00:06:46.656 12:28:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:46.656 12:28:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:46.656 12:28:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:46.656 12:28:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:46.656 12:28:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:46.656 12:28:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.656 12:28:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.656 12:28:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.656 12:28:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.656 12:28:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:46.914 12:28:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:46.914 { 00:06:46.914 "nbd_device": "/dev/nbd0", 00:06:46.914 "bdev_name": "Malloc0" 00:06:46.914 }, 00:06:46.914 { 00:06:46.914 "nbd_device": "/dev/nbd1", 00:06:46.914 "bdev_name": "Malloc1" 00:06:46.914 } 00:06:46.914 ]' 00:06:46.914 12:28:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:46.914 { 00:06:46.914 "nbd_device": "/dev/nbd0", 00:06:46.914 "bdev_name": "Malloc0" 00:06:46.914 }, 00:06:46.914 { 00:06:46.914 "nbd_device": "/dev/nbd1", 00:06:46.914 "bdev_name": "Malloc1" 00:06:46.914 } 00:06:46.914 ]' 00:06:46.914 12:28:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.914 12:28:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:46.914 /dev/nbd1' 00:06:46.914 12:28:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:46.914 /dev/nbd1' 00:06:46.914 12:28:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.914 12:28:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:46.914 12:28:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:46.914 12:28:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:46.914 12:28:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:46.915 12:28:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:46.915 12:28:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.915 12:28:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.915 12:28:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:46.915 12:28:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:46.915 12:28:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:46.915 12:28:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:46.915 256+0 records in 00:06:46.915 256+0 records out 00:06:46.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107006 s, 98.0 MB/s 00:06:46.915 12:28:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.915 12:28:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:47.173 256+0 records in 00:06:47.174 256+0 records out 00:06:47.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204971 s, 51.2 MB/s 00:06:47.174 12:28:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.174 12:28:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:47.174 256+0 records in 00:06:47.174 256+0 records out 00:06:47.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219695 s, 47.7 MB/s 00:06:47.174 12:28:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:47.174 12:28:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.174 12:28:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.174 12:28:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:47.174 12:28:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:47.174 12:28:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:47.174 12:28:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:47.174 12:28:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.174 12:28:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:47.174 12:28:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.174 12:28:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:47.174 12:28:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:47.174 12:28:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:47.174 12:28:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.174 12:28:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.174 12:28:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:47.174 12:28:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:47.174 12:28:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.174 12:28:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:47.432 12:28:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.432 12:28:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.432 12:28:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.432 12:28:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.432 12:28:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.432 12:28:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.432 12:28:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:47.432 12:28:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.432 12:28:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.432 12:28:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:47.432 12:28:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:47.432 12:28:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:47.432 12:28:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:47.432 12:28:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.432 12:28:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.432 12:28:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:47.432 12:28:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:47.432 12:28:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.432 12:28:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.432 12:28:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.432 12:28:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.691 12:28:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:47.691 12:28:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:47.691 12:28:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.691 12:28:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:47.691 12:28:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:47.691 12:28:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.691 12:28:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:47.691 12:28:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:47.691 12:28:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:47.691 12:28:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:47.691 12:28:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:47.691 12:28:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:47.691 12:28:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:47.950 12:28:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:48.209 [2024-11-15 12:28:28.382539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:48.209 [2024-11-15 12:28:28.427630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.209 [2024-11-15 12:28:28.427631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.209 [2024-11-15 12:28:28.474897] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:48.209 [2024-11-15 12:28:28.474951] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:51.492 12:28:31 event.app_repeat -- event/event.sh@38 -- # waitforlisten 663970 /var/tmp/spdk-nbd.sock 00:06:51.492 12:28:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 663970 ']' 00:06:51.492 12:28:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.492 12:28:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.492 12:28:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:51.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
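The app_repeat trace above exercises SPDK's nbd helpers end to end: two 64 MiB malloc bdevs (Malloc0, Malloc1) are created over the spdk-nbd.sock RPC socket, exported as /dev/nbd0 and /dev/nbd1, a 1 MiB random file is written to each device with dd, read back and compared with cmp, and the exports are torn down again until nbd_get_disks reports an empty list. A minimal sketch of the same sequence for one device, assuming a running spdk_tgt on /var/tmp/spdk-nbd.sock, the nbd kernel module loaded, and repo-relative paths (the temp-file path below is a placeholder, not the one from the trace):

  rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create 64 4096                     # 64 MiB bdev, 4 KiB blocks -> prints "Malloc0"
  $rpc nbd_start_disk Malloc0 /dev/nbd0               # expose the bdev as a kernel block device
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0             # data must round-trip byte for byte
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_get_disks                                  # prints '[]' once all exports are stopped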
00:06:51.492 12:28:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.492 12:28:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.492 12:28:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.492 12:28:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:51.492 12:28:31 event.app_repeat -- event/event.sh@39 -- # killprocess 663970 00:06:51.492 12:28:31 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 663970 ']' 00:06:51.492 12:28:31 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 663970 00:06:51.492 12:28:31 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:51.492 12:28:31 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.492 12:28:31 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 663970 00:06:51.492 12:28:31 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.492 12:28:31 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.492 12:28:31 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 663970' 00:06:51.492 killing process with pid 663970 00:06:51.492 12:28:31 event.app_repeat -- common/autotest_common.sh@973 -- # kill 663970 00:06:51.492 12:28:31 event.app_repeat -- common/autotest_common.sh@978 -- # wait 663970 00:06:51.492 spdk_app_start is called in Round 0. 00:06:51.492 Shutdown signal received, stop current app iteration 00:06:51.492 Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 reinitialization... 00:06:51.492 spdk_app_start is called in Round 1. 00:06:51.492 Shutdown signal received, stop current app iteration 00:06:51.492 Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 reinitialization... 00:06:51.492 spdk_app_start is called in Round 2. 00:06:51.492 Shutdown signal received, stop current app iteration 00:06:51.492 Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 reinitialization... 00:06:51.492 spdk_app_start is called in Round 3. 
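The Round 0 through Round 3 messages here are the point of the app_repeat test: after each pass over the nbd checks the script asks the running app to restart itself and then waits for the RPC socket to come back before the next round. The two commands driving that cycle, exactly as they appear in the trace:

  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM   # event.sh@34: request shutdown of the current iteration
  sleep 3                                                                  # event.sh@35: allow the app to reinitialize before the next round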
00:06:51.492 Shutdown signal received, stop current app iteration 00:06:51.492 12:28:31 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:51.492 12:28:31 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:51.492 00:06:51.492 real 0m16.412s 00:06:51.492 user 0m35.268s 00:06:51.492 sys 0m3.239s 00:06:51.492 12:28:31 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.492 12:28:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.492 ************************************ 00:06:51.492 END TEST app_repeat 00:06:51.492 ************************************ 00:06:51.492 12:28:31 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:51.492 12:28:31 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:51.492 12:28:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.492 12:28:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.492 12:28:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.492 ************************************ 00:06:51.492 START TEST cpu_locks 00:06:51.492 ************************************ 00:06:51.492 12:28:31 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:51.492 * Looking for test storage... 00:06:51.492 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:51.492 12:28:31 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:51.492 12:28:31 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:51.492 12:28:31 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:51.753 12:28:31 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.753 12:28:31 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:51.753 12:28:31 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.753 12:28:31 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:51.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.753 --rc genhtml_branch_coverage=1 00:06:51.753 --rc genhtml_function_coverage=1 00:06:51.753 --rc genhtml_legend=1 00:06:51.753 --rc geninfo_all_blocks=1 00:06:51.753 --rc geninfo_unexecuted_blocks=1 00:06:51.753 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:51.753 ' 00:06:51.753 12:28:31 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:51.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.753 --rc genhtml_branch_coverage=1 00:06:51.753 --rc genhtml_function_coverage=1 00:06:51.753 --rc genhtml_legend=1 00:06:51.753 --rc geninfo_all_blocks=1 00:06:51.753 --rc geninfo_unexecuted_blocks=1 00:06:51.753 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:51.753 ' 00:06:51.753 12:28:31 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:51.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.753 --rc genhtml_branch_coverage=1 00:06:51.753 --rc genhtml_function_coverage=1 00:06:51.753 --rc genhtml_legend=1 00:06:51.753 --rc geninfo_all_blocks=1 00:06:51.753 --rc geninfo_unexecuted_blocks=1 00:06:51.753 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:51.753 ' 00:06:51.753 12:28:31 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:51.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.753 --rc genhtml_branch_coverage=1 00:06:51.753 --rc genhtml_function_coverage=1 00:06:51.753 --rc genhtml_legend=1 00:06:51.753 --rc geninfo_all_blocks=1 00:06:51.753 --rc geninfo_unexecuted_blocks=1 00:06:51.753 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:51.753 ' 00:06:51.753 12:28:31 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:51.753 12:28:31 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:51.753 12:28:31 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:51.753 12:28:31 event.cpu_locks -- 
event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:51.753 12:28:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.753 12:28:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.753 12:28:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.753 ************************************ 00:06:51.753 START TEST default_locks 00:06:51.753 ************************************ 00:06:51.753 12:28:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:51.753 12:28:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=666318 00:06:51.753 12:28:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 666318 00:06:51.753 12:28:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:51.753 12:28:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 666318 ']' 00:06:51.753 12:28:31 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.753 12:28:31 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.753 12:28:31 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.753 12:28:31 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.753 12:28:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.753 [2024-11-15 12:28:31.940919] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:06:51.753 [2024-11-15 12:28:31.940989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid666318 ] 00:06:51.753 [2024-11-15 12:28:32.022675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.753 [2024-11-15 12:28:32.070737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.012 12:28:32 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.012 12:28:32 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:52.012 12:28:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 666318 00:06:52.012 12:28:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 666318 00:06:52.012 12:28:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:52.578 lslocks: write error 00:06:52.578 12:28:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 666318 00:06:52.578 12:28:32 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 666318 ']' 00:06:52.578 12:28:32 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 666318 00:06:52.578 12:28:32 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:52.578 12:28:32 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.578 12:28:32 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 666318 00:06:52.837 12:28:32 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.837 12:28:32 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.837 12:28:32 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 666318' 00:06:52.837 killing process with pid 666318 00:06:52.837 12:28:32 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 666318 00:06:52.837 12:28:32 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 666318 00:06:53.096 12:28:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 666318 00:06:53.096 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:53.096 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 666318 00:06:53.096 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:53.096 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.096 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:53.096 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.096 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 666318 00:06:53.096 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 666318 ']' 00:06:53.096 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.096 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 
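default_locks starts a single spdk_tgt on core 0 (-m 0x1) and verifies the core lock directly: cpu_locks.sh's locks_exist helper lists the file locks held by the target pid and greps for the spdk_cpu_lock file that SPDK takes per claimed core (the stray "lslocks: write error" is most likely lslocks hitting a closed pipe because grep -q exits after the first match; it does not affect the result). The check boils down to:

  lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock    # succeeds while the target holds the core 0 lock
  kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid"          # killprocess: after shutdown the lock is released
  NOT waitforlisten "$spdk_tgt_pid"                     # follow-up negative check expects the pid to be gone

The variable name $spdk_tgt_pid is illustrative; the trace uses the literal pid 666318.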
00:06:53.096 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.096 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.096 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.096 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (666318) - No such process 00:06:53.096 ERROR: process (pid: 666318) is no longer running 00:06:53.096 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.096 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:53.096 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:53.096 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:53.096 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:53.096 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:53.096 12:28:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:53.096 12:28:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:53.096 12:28:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:53.096 12:28:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:53.096 00:06:53.096 real 0m1.377s 00:06:53.096 user 0m1.336s 00:06:53.096 sys 0m0.665s 00:06:53.096 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.096 12:28:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.096 ************************************ 00:06:53.096 END TEST default_locks 00:06:53.096 ************************************ 00:06:53.096 12:28:33 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:53.096 12:28:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.096 12:28:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.096 12:28:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.096 ************************************ 00:06:53.096 START TEST default_locks_via_rpc 00:06:53.096 ************************************ 00:06:53.096 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:53.096 12:28:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=666526 00:06:53.096 12:28:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 666526 00:06:53.096 12:28:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:53.096 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 666526 ']' 00:06:53.096 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.096 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.096 12:28:33 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.096 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.096 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.096 [2024-11-15 12:28:33.391234] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:06:53.096 [2024-11-15 12:28:33.391300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid666526 ] 00:06:53.355 [2024-11-15 12:28:33.482574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.355 [2024-11-15 12:28:33.533755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.613 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.613 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.613 12:28:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:53.613 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.613 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.613 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.613 12:28:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:53.613 12:28:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:53.613 12:28:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:53.613 12:28:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:53.613 12:28:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:53.613 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.613 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.613 12:28:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.613 12:28:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 666526 00:06:53.613 12:28:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 666526 00:06:53.613 12:28:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.871 12:28:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 666526 00:06:53.871 12:28:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 666526 ']' 00:06:53.871 12:28:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 666526 00:06:53.871 12:28:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:53.871 12:28:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:06:53.871 12:28:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 666526 00:06:53.871 12:28:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.871 12:28:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.871 12:28:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 666526' 00:06:53.871 killing process with pid 666526 00:06:53.871 12:28:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 666526 00:06:53.871 12:28:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 666526 00:06:54.438 00:06:54.438 real 0m1.138s 00:06:54.438 user 0m1.069s 00:06:54.438 sys 0m0.527s 00:06:54.438 12:28:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.438 12:28:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.438 ************************************ 00:06:54.438 END TEST default_locks_via_rpc 00:06:54.438 ************************************ 00:06:54.438 12:28:34 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:54.438 12:28:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.438 12:28:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.438 12:28:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.438 ************************************ 00:06:54.438 START TEST non_locking_app_on_locked_coremask 00:06:54.438 ************************************ 00:06:54.438 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:54.438 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:54.438 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=666726 00:06:54.438 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 666726 /var/tmp/spdk.sock 00:06:54.438 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 666726 ']' 00:06:54.438 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.438 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.438 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.438 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.438 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.438 [2024-11-15 12:28:34.598463] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
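The default_locks_via_rpc run that ends above repeats the lock check without restarting the target: the core lock is dropped and re-taken over JSON-RPC while spdk_tgt keeps running, and locks_exist is consulted after each step. Against a target on the default /var/tmp/spdk.sock this is just:

  ./scripts/rpc.py framework_disable_cpumask_locks   # releases the spdk_cpu_lock file for core 0; lslocks then finds nothing
  ./scripts/rpc.py framework_enable_cpumask_locks    # re-claims the lock, so the lslocks | grep check passes again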
00:06:54.438 [2024-11-15 12:28:34.598523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid666726 ] 00:06:54.438 [2024-11-15 12:28:34.682965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.438 [2024-11-15 12:28:34.732721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.697 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.697 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:54.697 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=666843 00:06:54.697 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 666843 /var/tmp/spdk2.sock 00:06:54.697 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:54.697 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 666843 ']' 00:06:54.697 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.697 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.697 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.697 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.697 12:28:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.697 [2024-11-15 12:28:35.010649] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:06:54.697 [2024-11-15 12:28:35.010732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid666843 ] 00:06:54.955 [2024-11-15 12:28:35.134526] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
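non_locking_app_on_locked_coremask pairs two targets on the same core: the first spdk_tgt (pid 666726 here) claims the core 0 lock normally, and the second is launched with --disable-cpumask-locks plus its own RPC socket so it can come up on the already-locked core, which is what the "CPU core locks deactivated" notice above confirms. The launch pair, using the same flags as the trace (paths shortened to the repo root):

  ./build/bin/spdk_tgt -m 0x1 &                                                   # holds the core 0 lock, default /var/tmp/spdk.sock
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # opts out of core locking, second socket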
00:06:54.955 [2024-11-15 12:28:35.134560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.955 [2024-11-15 12:28:35.235343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.889 12:28:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.889 12:28:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:55.889 12:28:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 666726 00:06:55.889 12:28:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 666726 00:06:55.889 12:28:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.823 lslocks: write error 00:06:56.823 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 666726 00:06:56.823 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 666726 ']' 00:06:56.823 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 666726 00:06:56.823 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:56.823 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.823 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 666726 00:06:56.823 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.823 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.823 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 666726' 00:06:56.823 killing process with pid 666726 00:06:56.823 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 666726 00:06:56.823 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 666726 00:06:57.390 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 666843 00:06:57.390 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 666843 ']' 00:06:57.390 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 666843 00:06:57.390 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:57.390 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.390 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 666843 00:06:57.648 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.649 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.649 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 666843' 00:06:57.649 killing 
process with pid 666843 00:06:57.649 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 666843 00:06:57.649 12:28:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 666843 00:06:57.907 00:06:57.908 real 0m3.508s 00:06:57.908 user 0m3.655s 00:06:57.908 sys 0m1.348s 00:06:57.908 12:28:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.908 12:28:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.908 ************************************ 00:06:57.908 END TEST non_locking_app_on_locked_coremask 00:06:57.908 ************************************ 00:06:57.908 12:28:38 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:57.908 12:28:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.908 12:28:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.908 12:28:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.908 ************************************ 00:06:57.908 START TEST locking_app_on_unlocked_coremask 00:06:57.908 ************************************ 00:06:57.908 12:28:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:57.908 12:28:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=667286 00:06:57.908 12:28:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 667286 /var/tmp/spdk.sock 00:06:57.908 12:28:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:57.908 12:28:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 667286 ']' 00:06:57.908 12:28:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.908 12:28:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.908 12:28:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.908 12:28:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.908 12:28:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.908 [2024-11-15 12:28:38.202114] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:06:57.908 [2024-11-15 12:28:38.202178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid667286 ] 00:06:58.167 [2024-11-15 12:28:38.289378] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:58.167 [2024-11-15 12:28:38.289414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.167 [2024-11-15 12:28:38.337084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.426 12:28:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.426 12:28:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:58.426 12:28:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=667291 00:06:58.426 12:28:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 667291 /var/tmp/spdk2.sock 00:06:58.426 12:28:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:58.426 12:28:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 667291 ']' 00:06:58.426 12:28:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.426 12:28:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.426 12:28:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.426 12:28:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.426 12:28:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.426 [2024-11-15 12:28:38.574356] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
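locking_app_on_unlocked_coremask flips the roles: the first target is the one started with --disable-cpumask-locks, so core 0 stays unlocked, and the second target (plain -m 0x1 on /var/tmp/spdk2.sock, pid 667291 here) is expected to start cleanly and take the lock itself; locks_exist is then checked against that second pid. Launch order, mirroring the trace:

  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &          # leaves core 0 unlocked
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &           # acquires the core 0 lock; this is the pid the lslocks check targets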
00:06:58.426 [2024-11-15 12:28:38.574419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid667291 ] 00:06:58.426 [2024-11-15 12:28:38.685132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.684 [2024-11-15 12:28:38.777344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.249 12:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.249 12:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:59.249 12:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 667291 00:06:59.249 12:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 667291 00:06:59.249 12:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:00.183 lslocks: write error 00:07:00.183 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 667286 00:07:00.183 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 667286 ']' 00:07:00.183 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 667286 00:07:00.183 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:00.183 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.183 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 667286 00:07:00.183 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.183 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.183 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 667286' 00:07:00.183 killing process with pid 667286 00:07:00.183 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 667286 00:07:00.183 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 667286 00:07:00.749 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 667291 00:07:00.749 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 667291 ']' 00:07:00.749 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 667291 00:07:00.749 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:00.749 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.749 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 667291 00:07:00.749 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.749 12:28:40 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.749 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 667291' 00:07:00.749 killing process with pid 667291 00:07:00.749 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 667291 00:07:00.749 12:28:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 667291 00:07:01.007 00:07:01.007 real 0m3.111s 00:07:01.007 user 0m3.240s 00:07:01.007 sys 0m1.127s 00:07:01.008 12:28:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.008 12:28:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.008 ************************************ 00:07:01.008 END TEST locking_app_on_unlocked_coremask 00:07:01.008 ************************************ 00:07:01.008 12:28:41 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:01.008 12:28:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.008 12:28:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.008 12:28:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.266 ************************************ 00:07:01.266 START TEST locking_app_on_locked_coremask 00:07:01.266 ************************************ 00:07:01.266 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:01.266 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=667682 00:07:01.266 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 667682 /var/tmp/spdk.sock 00:07:01.266 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 667682 ']' 00:07:01.266 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.266 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.266 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.266 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.266 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.266 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:01.266 [2024-11-15 12:28:41.389033] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:07:01.266 [2024-11-15 12:28:41.389113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid667682 ] 00:07:01.266 [2024-11-15 12:28:41.475819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.266 [2024-11-15 12:28:41.523969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.524 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.524 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:01.524 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=667821 00:07:01.524 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 667821 /var/tmp/spdk2.sock 00:07:01.524 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:01.524 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 667821 /var/tmp/spdk2.sock 00:07:01.524 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:01.524 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:01.524 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.524 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:01.524 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.524 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 667821 /var/tmp/spdk2.sock 00:07:01.524 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 667821 ']' 00:07:01.524 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.524 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.524 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:01.524 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.524 12:28:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.524 [2024-11-15 12:28:41.774877] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
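locking_app_on_locked_coremask is the negative case: the first target (667682) already holds the core 0 lock, and the second is launched without --disable-cpumask-locks, so its startup is expected to abort with the "Cannot create lock on core 0" / "Unable to acquire lock on assigned core mask" errors that follow below. The test encodes the expectation with autotest_common.sh's NOT wrapper, which succeeds only when the wrapped command fails:

  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
  NOT waitforlisten "$pid2" /var/tmp/spdk2.sock    # passes because the second target exits instead of listening

$pid2 stands in for the literal pid 667821 used in the trace.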
00:07:01.524 [2024-11-15 12:28:41.774933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid667821 ] 00:07:01.782 [2024-11-15 12:28:41.894142] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 667682 has claimed it. 00:07:01.782 [2024-11-15 12:28:41.894182] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:02.347 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (667821) - No such process 00:07:02.347 ERROR: process (pid: 667821) is no longer running 00:07:02.347 12:28:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.347 12:28:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:02.347 12:28:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:02.347 12:28:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:02.347 12:28:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:02.347 12:28:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:02.347 12:28:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 667682 00:07:02.347 12:28:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 667682 00:07:02.347 12:28:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:02.913 lslocks: write error 00:07:02.913 12:28:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 667682 00:07:02.913 12:28:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 667682 ']' 00:07:02.913 12:28:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 667682 00:07:02.913 12:28:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:02.913 12:28:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.913 12:28:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 667682 00:07:02.913 12:28:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.913 12:28:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.913 12:28:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 667682' 00:07:02.913 killing process with pid 667682 00:07:02.913 12:28:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 667682 00:07:02.913 12:28:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 667682 00:07:03.172 00:07:03.172 real 0m2.139s 00:07:03.172 user 0m2.230s 00:07:03.172 sys 0m0.760s 00:07:03.172 12:28:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.172 
12:28:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.172 ************************************ 00:07:03.172 END TEST locking_app_on_locked_coremask 00:07:03.172 ************************************ 00:07:03.430 12:28:43 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:03.430 12:28:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:03.430 12:28:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.430 12:28:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.430 ************************************ 00:07:03.430 START TEST locking_overlapped_coremask 00:07:03.430 ************************************ 00:07:03.430 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:03.430 12:28:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=668056 00:07:03.430 12:28:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 668056 /var/tmp/spdk.sock 00:07:03.430 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 668056 ']' 00:07:03.430 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.430 12:28:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:03.430 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.430 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.430 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.430 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.430 [2024-11-15 12:28:43.581751] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:07:03.430 [2024-11-15 12:28:43.581794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid668056 ] 00:07:03.430 [2024-11-15 12:28:43.667450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.430 [2024-11-15 12:28:43.719257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.430 [2024-11-15 12:28:43.719368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.430 [2024-11-15 12:28:43.719371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.688 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.688 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:03.688 12:28:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=668074 00:07:03.688 12:28:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 668074 /var/tmp/spdk2.sock 00:07:03.688 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:03.688 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 668074 /var/tmp/spdk2.sock 00:07:03.688 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:03.688 12:28:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:03.688 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:03.688 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:03.688 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:03.688 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 668074 /var/tmp/spdk2.sock 00:07:03.688 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 668074 ']' 00:07:03.688 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.688 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.689 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.689 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.689 12:28:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.689 [2024-11-15 12:28:43.960275] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:07:03.689 [2024-11-15 12:28:43.960353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid668074 ] 00:07:03.946 [2024-11-15 12:28:44.080335] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 668056 has claimed it. 00:07:03.946 [2024-11-15 12:28:44.080371] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:04.512 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (668074) - No such process 00:07:04.512 ERROR: process (pid: 668074) is no longer running 00:07:04.512 12:28:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.512 12:28:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:04.512 12:28:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:04.512 12:28:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:04.513 12:28:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:04.513 12:28:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:04.513 12:28:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:04.513 12:28:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:04.513 12:28:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:04.513 12:28:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:04.513 12:28:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 668056 00:07:04.513 12:28:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 668056 ']' 00:07:04.513 12:28:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 668056 00:07:04.513 12:28:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:04.513 12:28:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.513 12:28:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 668056 00:07:04.513 12:28:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.513 12:28:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.513 12:28:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 668056' 00:07:04.513 killing process with pid 668056 00:07:04.513 12:28:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 668056 00:07:04.513 12:28:44 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 668056 00:07:04.771 00:07:04.771 real 0m1.454s 00:07:04.771 user 0m4.013s 00:07:04.771 sys 0m0.443s 00:07:04.771 12:28:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.771 12:28:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.771 ************************************ 00:07:04.771 END TEST locking_overlapped_coremask 00:07:04.771 ************************************ 00:07:04.771 12:28:45 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:04.771 12:28:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.771 12:28:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.771 12:28:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.771 ************************************ 00:07:04.771 START TEST locking_overlapped_coremask_via_rpc 00:07:04.771 ************************************ 00:07:04.771 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:04.771 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=668280 00:07:04.771 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 668280 /var/tmp/spdk.sock 00:07:04.772 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 668280 ']' 00:07:04.772 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.772 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.772 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.772 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.772 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.772 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:04.772 [2024-11-15 12:28:45.112790] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:07:04.772 [2024-11-15 12:28:45.112872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid668280 ] 00:07:05.030 [2024-11-15 12:28:45.200362] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:05.030 [2024-11-15 12:28:45.200394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:05.030 [2024-11-15 12:28:45.252190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.030 [2024-11-15 12:28:45.252275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.030 [2024-11-15 12:28:45.252278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.288 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.288 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:05.288 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=668285 00:07:05.288 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 668285 /var/tmp/spdk2.sock 00:07:05.288 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 668285 ']' 00:07:05.288 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.288 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.288 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:05.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:05.288 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.288 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.288 12:28:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:05.288 [2024-11-15 12:28:45.516715] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:07:05.288 [2024-11-15 12:28:45.516788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid668285 ] 00:07:05.546 [2024-11-15 12:28:45.643428] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:05.546 [2024-11-15 12:28:45.643457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:05.546 [2024-11-15 12:28:45.734742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.546 [2024-11-15 12:28:45.738366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.546 [2024-11-15 12:28:45.738368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.112 [2024-11-15 12:28:46.407394] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 668280 has claimed it. 
00:07:06.112 request: 00:07:06.112 { 00:07:06.112 "method": "framework_enable_cpumask_locks", 00:07:06.112 "req_id": 1 00:07:06.112 } 00:07:06.112 Got JSON-RPC error response 00:07:06.112 response: 00:07:06.112 { 00:07:06.112 "code": -32603, 00:07:06.112 "message": "Failed to claim CPU core: 2" 00:07:06.112 } 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 668280 /var/tmp/spdk.sock 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 668280 ']' 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.112 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.370 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.370 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:06.370 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 668285 /var/tmp/spdk2.sock 00:07:06.370 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 668285 ']' 00:07:06.370 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.370 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.370 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:06.370 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.370 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.628 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.628 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:06.628 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:06.628 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:06.628 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:06.628 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:06.628 00:07:06.628 real 0m1.748s 00:07:06.628 user 0m0.809s 00:07:06.628 sys 0m0.170s 00:07:06.628 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.628 12:28:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.628 ************************************ 00:07:06.628 END TEST locking_overlapped_coremask_via_rpc 00:07:06.628 ************************************ 00:07:06.628 12:28:46 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:06.628 12:28:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 668280 ]] 00:07:06.628 12:28:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 668280 00:07:06.628 12:28:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 668280 ']' 00:07:06.628 12:28:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 668280 00:07:06.628 12:28:46 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:06.628 12:28:46 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.628 12:28:46 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 668280 00:07:06.628 12:28:46 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.628 12:28:46 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.628 12:28:46 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 668280' 00:07:06.628 killing process with pid 668280 00:07:06.628 12:28:46 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 668280 00:07:06.628 12:28:46 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 668280 00:07:07.195 12:28:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 668285 ]] 00:07:07.195 12:28:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 668285 00:07:07.195 12:28:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 668285 ']' 00:07:07.195 12:28:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 668285 00:07:07.195 12:28:47 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:07.195 12:28:47 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:07:07.195 12:28:47 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 668285 00:07:07.195 12:28:47 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:07.196 12:28:47 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:07.196 12:28:47 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 668285' 00:07:07.196 killing process with pid 668285 00:07:07.196 12:28:47 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 668285 00:07:07.196 12:28:47 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 668285 00:07:07.503 12:28:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:07.503 12:28:47 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:07.503 12:28:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 668280 ]] 00:07:07.503 12:28:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 668280 00:07:07.503 12:28:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 668280 ']' 00:07:07.503 12:28:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 668280 00:07:07.503 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (668280) - No such process 00:07:07.503 12:28:47 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 668280 is not found' 00:07:07.503 Process with pid 668280 is not found 00:07:07.503 12:28:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 668285 ]] 00:07:07.503 12:28:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 668285 00:07:07.503 12:28:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 668285 ']' 00:07:07.503 12:28:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 668285 00:07:07.503 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (668285) - No such process 00:07:07.503 12:28:47 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 668285 is not found' 00:07:07.503 Process with pid 668285 is not found 00:07:07.503 12:28:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:07.503 00:07:07.503 real 0m15.963s 00:07:07.503 user 0m26.368s 00:07:07.503 sys 0m6.181s 00:07:07.503 12:28:47 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.503 12:28:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.503 ************************************ 00:07:07.503 END TEST cpu_locks 00:07:07.503 ************************************ 00:07:07.503 00:07:07.503 real 0m41.140s 00:07:07.503 user 1m16.118s 00:07:07.503 sys 0m10.567s 00:07:07.503 12:28:47 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.503 12:28:47 event -- common/autotest_common.sh@10 -- # set +x 00:07:07.503 ************************************ 00:07:07.503 END TEST event 00:07:07.503 ************************************ 00:07:07.503 12:28:47 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:07:07.503 12:28:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.503 12:28:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.503 12:28:47 -- common/autotest_common.sh@10 -- # set +x 00:07:07.503 ************************************ 00:07:07.503 START TEST thread 00:07:07.503 ************************************ 00:07:07.503 12:28:47 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:07:07.782 * Looking for test storage... 00:07:07.782 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:07:07.782 12:28:47 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:07.782 12:28:47 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:07.782 12:28:47 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:07.782 12:28:47 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:07.782 12:28:47 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.782 12:28:47 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.782 12:28:47 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.782 12:28:47 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.782 12:28:47 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.782 12:28:47 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.782 12:28:47 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.782 12:28:47 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.782 12:28:47 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.782 12:28:47 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.782 12:28:47 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.782 12:28:47 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:07.782 12:28:47 thread -- scripts/common.sh@345 -- # : 1 00:07:07.782 12:28:47 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.782 12:28:47 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.782 12:28:47 thread -- scripts/common.sh@365 -- # decimal 1 00:07:07.782 12:28:47 thread -- scripts/common.sh@353 -- # local d=1 00:07:07.782 12:28:47 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.782 12:28:47 thread -- scripts/common.sh@355 -- # echo 1 00:07:07.782 12:28:47 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.782 12:28:47 thread -- scripts/common.sh@366 -- # decimal 2 00:07:07.782 12:28:47 thread -- scripts/common.sh@353 -- # local d=2 00:07:07.782 12:28:47 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.782 12:28:47 thread -- scripts/common.sh@355 -- # echo 2 00:07:07.782 12:28:47 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.782 12:28:47 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.782 12:28:47 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.782 12:28:47 thread -- scripts/common.sh@368 -- # return 0 00:07:07.782 12:28:47 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.782 12:28:47 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:07.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.782 --rc genhtml_branch_coverage=1 00:07:07.782 --rc genhtml_function_coverage=1 00:07:07.782 --rc genhtml_legend=1 00:07:07.782 --rc geninfo_all_blocks=1 00:07:07.782 --rc geninfo_unexecuted_blocks=1 00:07:07.782 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:07.782 ' 00:07:07.782 12:28:47 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:07.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.782 --rc genhtml_branch_coverage=1 00:07:07.782 --rc genhtml_function_coverage=1 00:07:07.782 --rc genhtml_legend=1 00:07:07.782 --rc geninfo_all_blocks=1 
00:07:07.782 --rc geninfo_unexecuted_blocks=1 00:07:07.782 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:07.782 ' 00:07:07.782 12:28:47 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:07.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.782 --rc genhtml_branch_coverage=1 00:07:07.782 --rc genhtml_function_coverage=1 00:07:07.782 --rc genhtml_legend=1 00:07:07.782 --rc geninfo_all_blocks=1 00:07:07.782 --rc geninfo_unexecuted_blocks=1 00:07:07.782 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:07.782 ' 00:07:07.782 12:28:47 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:07.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.782 --rc genhtml_branch_coverage=1 00:07:07.782 --rc genhtml_function_coverage=1 00:07:07.782 --rc genhtml_legend=1 00:07:07.782 --rc geninfo_all_blocks=1 00:07:07.782 --rc geninfo_unexecuted_blocks=1 00:07:07.782 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:07.782 ' 00:07:07.782 12:28:47 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:07.782 12:28:47 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:07.782 12:28:47 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.782 12:28:47 thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.782 ************************************ 00:07:07.782 START TEST thread_poller_perf 00:07:07.782 ************************************ 00:07:07.782 12:28:47 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:07.782 [2024-11-15 12:28:48.007549] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:07:07.782 [2024-11-15 12:28:48.007633] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid668738 ] 00:07:07.783 [2024-11-15 12:28:48.096478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.041 [2024-11-15 12:28:48.146739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.041 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:08.976 [2024-11-15T11:28:49.320Z] ====================================== 00:07:08.976 [2024-11-15T11:28:49.320Z] busy:2305314236 (cyc) 00:07:08.976 [2024-11-15T11:28:49.320Z] total_run_count: 828000 00:07:08.976 [2024-11-15T11:28:49.320Z] tsc_hz: 2300000000 (cyc) 00:07:08.976 [2024-11-15T11:28:49.320Z] ====================================== 00:07:08.976 [2024-11-15T11:28:49.320Z] poller_cost: 2784 (cyc), 1210 (nsec) 00:07:08.976 00:07:08.976 real 0m1.203s 00:07:08.976 user 0m1.106s 00:07:08.976 sys 0m0.092s 00:07:08.976 12:28:49 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.976 12:28:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:08.976 ************************************ 00:07:08.976 END TEST thread_poller_perf 00:07:08.976 ************************************ 00:07:08.976 12:28:49 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:08.976 12:28:49 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:08.976 12:28:49 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.976 12:28:49 thread -- common/autotest_common.sh@10 -- # set +x 00:07:08.976 ************************************ 00:07:08.976 START TEST thread_poller_perf 00:07:08.976 ************************************ 00:07:08.976 12:28:49 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:08.976 [2024-11-15 12:28:49.296959] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:07:08.977 [2024-11-15 12:28:49.297044] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid668937 ] 00:07:09.235 [2024-11-15 12:28:49.387445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.235 [2024-11-15 12:28:49.436052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.235 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:10.169 [2024-11-15T11:28:50.513Z] ====================================== 00:07:10.169 [2024-11-15T11:28:50.513Z] busy:2301407376 (cyc) 00:07:10.169 [2024-11-15T11:28:50.513Z] total_run_count: 12532000 00:07:10.169 [2024-11-15T11:28:50.513Z] tsc_hz: 2300000000 (cyc) 00:07:10.169 [2024-11-15T11:28:50.513Z] ====================================== 00:07:10.169 [2024-11-15T11:28:50.513Z] poller_cost: 183 (cyc), 79 (nsec) 00:07:10.169 00:07:10.169 real 0m1.204s 00:07:10.169 user 0m1.104s 00:07:10.169 sys 0m0.095s 00:07:10.169 12:28:50 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.169 12:28:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:10.169 ************************************ 00:07:10.169 END TEST thread_poller_perf 00:07:10.169 ************************************ 00:07:10.427 12:28:50 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:07:10.427 12:28:50 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:07:10.427 12:28:50 thread -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.427 12:28:50 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.427 12:28:50 thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.427 ************************************ 00:07:10.427 START TEST thread_spdk_lock 00:07:10.427 ************************************ 00:07:10.427 12:28:50 thread.thread_spdk_lock -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:07:10.427 [2024-11-15 12:28:50.581877] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:07:10.427 [2024-11-15 12:28:50.581967] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid669132 ] 00:07:10.427 [2024-11-15 12:28:50.672243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:10.427 [2024-11-15 12:28:50.723296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.427 [2024-11-15 12:28:50.723298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.993 [2024-11-15 12:28:51.217782] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 980:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:10.993 [2024-11-15 12:28:51.217820] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3112:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:07:10.993 [2024-11-15 12:28:51.217831] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x14d2c80 00:07:10.993 [2024-11-15 12:28:51.218595] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:10.993 [2024-11-15 12:28:51.218700] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1041:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:10.993 [2024-11-15 
12:28:51.218720] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:10.993 Starting test contend 00:07:10.993 Worker Delay Wait us Hold us Total us 00:07:10.993 0 3 168080 187969 356050 00:07:10.993 1 5 82620 289887 372507 00:07:10.993 PASS test contend 00:07:10.993 Starting test hold_by_poller 00:07:10.993 PASS test hold_by_poller 00:07:10.993 Starting test hold_by_message 00:07:10.993 PASS test hold_by_message 00:07:10.993 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:07:10.993 100014 assertions passed 00:07:10.993 0 assertions failed 00:07:10.993 00:07:10.993 real 0m0.692s 00:07:10.993 user 0m1.086s 00:07:10.993 sys 0m0.097s 00:07:10.993 12:28:51 thread.thread_spdk_lock -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.993 12:28:51 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:07:10.993 ************************************ 00:07:10.993 END TEST thread_spdk_lock 00:07:10.993 ************************************ 00:07:10.993 00:07:10.993 real 0m3.534s 00:07:10.993 user 0m3.497s 00:07:10.993 sys 0m0.552s 00:07:10.993 12:28:51 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.993 12:28:51 thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.993 ************************************ 00:07:10.993 END TEST thread 00:07:10.993 ************************************ 00:07:11.252 12:28:51 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:11.252 12:28:51 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:11.252 12:28:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.252 12:28:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.252 12:28:51 -- common/autotest_common.sh@10 -- # set +x 00:07:11.252 ************************************ 00:07:11.252 START TEST app_cmdline 00:07:11.252 ************************************ 00:07:11.252 12:28:51 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:11.252 * Looking for test storage... 
00:07:11.252 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:11.252 12:28:51 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:11.252 12:28:51 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:11.252 12:28:51 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:11.252 12:28:51 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.252 12:28:51 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:11.252 12:28:51 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.252 12:28:51 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:11.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.252 --rc genhtml_branch_coverage=1 00:07:11.252 --rc genhtml_function_coverage=1 00:07:11.252 --rc genhtml_legend=1 00:07:11.252 --rc geninfo_all_blocks=1 00:07:11.252 --rc geninfo_unexecuted_blocks=1 00:07:11.252 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:11.252 ' 00:07:11.252 12:28:51 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:11.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.252 --rc genhtml_branch_coverage=1 00:07:11.252 --rc genhtml_function_coverage=1 00:07:11.252 --rc 
genhtml_legend=1 00:07:11.252 --rc geninfo_all_blocks=1 00:07:11.252 --rc geninfo_unexecuted_blocks=1 00:07:11.252 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:11.252 ' 00:07:11.252 12:28:51 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:11.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.252 --rc genhtml_branch_coverage=1 00:07:11.252 --rc genhtml_function_coverage=1 00:07:11.252 --rc genhtml_legend=1 00:07:11.252 --rc geninfo_all_blocks=1 00:07:11.252 --rc geninfo_unexecuted_blocks=1 00:07:11.252 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:11.252 ' 00:07:11.252 12:28:51 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:11.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.252 --rc genhtml_branch_coverage=1 00:07:11.252 --rc genhtml_function_coverage=1 00:07:11.252 --rc genhtml_legend=1 00:07:11.252 --rc geninfo_all_blocks=1 00:07:11.253 --rc geninfo_unexecuted_blocks=1 00:07:11.253 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:11.253 ' 00:07:11.253 12:28:51 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:11.253 12:28:51 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=669374 00:07:11.253 12:28:51 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 669374 00:07:11.253 12:28:51 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:11.253 12:28:51 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 669374 ']' 00:07:11.253 12:28:51 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.253 12:28:51 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.253 12:28:51 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.253 12:28:51 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.253 12:28:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:11.510 [2024-11-15 12:28:51.597491] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:07:11.510 [2024-11-15 12:28:51.597566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid669374 ] 00:07:11.510 [2024-11-15 12:28:51.679615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.510 [2024-11-15 12:28:51.727662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.767 12:28:51 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.767 12:28:51 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:11.767 12:28:51 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:12.024 { 00:07:12.024 "version": "SPDK v25.01-pre git sha1 c46ddd981", 00:07:12.024 "fields": { 00:07:12.024 "major": 25, 00:07:12.024 "minor": 1, 00:07:12.024 "patch": 0, 00:07:12.024 "suffix": "-pre", 00:07:12.024 "commit": "c46ddd981" 00:07:12.024 } 00:07:12.024 } 00:07:12.024 12:28:52 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:12.024 12:28:52 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:12.024 12:28:52 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:12.024 12:28:52 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:12.024 12:28:52 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:12.024 12:28:52 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:12.024 12:28:52 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:12.024 12:28:52 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.024 12:28:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:12.024 12:28:52 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.024 12:28:52 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:12.024 12:28:52 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:12.024 12:28:52 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:12.024 12:28:52 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:12.024 12:28:52 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:12.024 12:28:52 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:12.024 12:28:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:12.024 12:28:52 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:12.024 12:28:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:12.024 12:28:52 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:12.024 12:28:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:12.024 12:28:52 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:12.024 12:28:52 app_cmdline -- 
common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:07:12.024 12:28:52 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:12.024 request: 00:07:12.024 { 00:07:12.024 "method": "env_dpdk_get_mem_stats", 00:07:12.024 "req_id": 1 00:07:12.024 } 00:07:12.024 Got JSON-RPC error response 00:07:12.024 response: 00:07:12.024 { 00:07:12.024 "code": -32601, 00:07:12.024 "message": "Method not found" 00:07:12.024 } 00:07:12.283 12:28:52 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:12.283 12:28:52 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:12.283 12:28:52 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:12.283 12:28:52 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:12.283 12:28:52 app_cmdline -- app/cmdline.sh@1 -- # killprocess 669374 00:07:12.283 12:28:52 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 669374 ']' 00:07:12.283 12:28:52 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 669374 00:07:12.283 12:28:52 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:12.283 12:28:52 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.283 12:28:52 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 669374 00:07:12.283 12:28:52 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.283 12:28:52 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.283 12:28:52 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 669374' 00:07:12.283 killing process with pid 669374 00:07:12.283 12:28:52 app_cmdline -- common/autotest_common.sh@973 -- # kill 669374 00:07:12.283 12:28:52 app_cmdline -- common/autotest_common.sh@978 -- # wait 669374 00:07:12.541 00:07:12.542 real 0m1.384s 00:07:12.542 user 0m1.545s 00:07:12.542 sys 0m0.516s 00:07:12.542 12:28:52 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.542 12:28:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:12.542 ************************************ 00:07:12.542 END TEST app_cmdline 00:07:12.542 ************************************ 00:07:12.542 12:28:52 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:07:12.542 12:28:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.542 12:28:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.542 12:28:52 -- common/autotest_common.sh@10 -- # set +x 00:07:12.542 ************************************ 00:07:12.542 START TEST version 00:07:12.542 ************************************ 00:07:12.542 12:28:52 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:07:12.800 * Looking for test storage... 
00:07:12.800 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:12.800 12:28:52 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:12.800 12:28:52 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:12.800 12:28:52 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:12.801 12:28:53 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:12.801 12:28:53 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.801 12:28:53 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.801 12:28:53 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.801 12:28:53 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.801 12:28:53 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.801 12:28:53 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.801 12:28:53 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.801 12:28:53 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.801 12:28:53 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.801 12:28:53 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.801 12:28:53 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.801 12:28:53 version -- scripts/common.sh@344 -- # case "$op" in 00:07:12.801 12:28:53 version -- scripts/common.sh@345 -- # : 1 00:07:12.801 12:28:53 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.801 12:28:53 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:12.801 12:28:53 version -- scripts/common.sh@365 -- # decimal 1 00:07:12.801 12:28:53 version -- scripts/common.sh@353 -- # local d=1 00:07:12.801 12:28:53 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.801 12:28:53 version -- scripts/common.sh@355 -- # echo 1 00:07:12.801 12:28:53 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.801 12:28:53 version -- scripts/common.sh@366 -- # decimal 2 00:07:12.801 12:28:53 version -- scripts/common.sh@353 -- # local d=2 00:07:12.801 12:28:53 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.801 12:28:53 version -- scripts/common.sh@355 -- # echo 2 00:07:12.801 12:28:53 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.801 12:28:53 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.801 12:28:53 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.801 12:28:53 version -- scripts/common.sh@368 -- # return 0 00:07:12.801 12:28:53 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.801 12:28:53 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:12.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.801 --rc genhtml_branch_coverage=1 00:07:12.801 --rc genhtml_function_coverage=1 00:07:12.801 --rc genhtml_legend=1 00:07:12.801 --rc geninfo_all_blocks=1 00:07:12.801 --rc geninfo_unexecuted_blocks=1 00:07:12.801 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:12.801 ' 00:07:12.801 12:28:53 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:12.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.801 --rc genhtml_branch_coverage=1 00:07:12.801 --rc genhtml_function_coverage=1 00:07:12.801 --rc genhtml_legend=1 00:07:12.801 --rc geninfo_all_blocks=1 00:07:12.801 --rc geninfo_unexecuted_blocks=1 00:07:12.801 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:12.801 ' 00:07:12.801 12:28:53 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:12.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.801 --rc genhtml_branch_coverage=1 00:07:12.801 --rc genhtml_function_coverage=1 00:07:12.801 --rc genhtml_legend=1 00:07:12.801 --rc geninfo_all_blocks=1 00:07:12.801 --rc geninfo_unexecuted_blocks=1 00:07:12.801 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:12.801 ' 00:07:12.801 12:28:53 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:12.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.801 --rc genhtml_branch_coverage=1 00:07:12.801 --rc genhtml_function_coverage=1 00:07:12.801 --rc genhtml_legend=1 00:07:12.801 --rc geninfo_all_blocks=1 00:07:12.801 --rc geninfo_unexecuted_blocks=1 00:07:12.801 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:12.801 ' 00:07:12.801 12:28:53 version -- app/version.sh@17 -- # get_header_version major 00:07:12.801 12:28:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:12.801 12:28:53 version -- app/version.sh@14 -- # cut -f2 00:07:12.801 12:28:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:12.801 12:28:53 version -- app/version.sh@17 -- # major=25 00:07:12.801 12:28:53 version -- app/version.sh@18 -- # get_header_version minor 00:07:12.801 12:28:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:12.801 12:28:53 version -- app/version.sh@14 -- # cut -f2 00:07:12.801 12:28:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:12.801 12:28:53 version -- app/version.sh@18 -- # minor=1 00:07:12.801 12:28:53 version -- app/version.sh@19 -- # get_header_version patch 00:07:12.801 12:28:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:12.801 12:28:53 version -- app/version.sh@14 -- # cut -f2 00:07:12.801 12:28:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:12.801 12:28:53 version -- app/version.sh@19 -- # patch=0 00:07:12.801 12:28:53 version -- app/version.sh@20 -- # get_header_version suffix 00:07:12.801 12:28:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:12.801 12:28:53 version -- app/version.sh@14 -- # cut -f2 00:07:12.801 12:28:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:12.801 12:28:53 version -- app/version.sh@20 -- # suffix=-pre 00:07:12.801 12:28:53 version -- app/version.sh@22 -- # version=25.1 00:07:12.801 12:28:53 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:12.801 12:28:53 version -- app/version.sh@28 -- # version=25.1rc0 00:07:12.801 12:28:53 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:12.801 12:28:53 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:07:12.801 12:28:53 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:12.801 12:28:53 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:12.801 00:07:12.801 real 0m0.249s 00:07:12.801 user 0m0.141s 00:07:12.801 sys 0m0.156s 00:07:12.801 12:28:53 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.801 12:28:53 version -- common/autotest_common.sh@10 -- # set +x 00:07:12.801 ************************************ 00:07:12.801 END TEST version 00:07:12.801 ************************************ 00:07:12.801 12:28:53 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:12.801 12:28:53 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:12.801 12:28:53 -- spdk/autotest.sh@194 -- # uname -s 00:07:13.059 12:28:53 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:13.059 12:28:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:13.059 12:28:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:13.059 12:28:53 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:13.059 12:28:53 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:13.059 12:28:53 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:13.059 12:28:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:13.059 12:28:53 -- common/autotest_common.sh@10 -- # set +x 00:07:13.059 12:28:53 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:13.059 12:28:53 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:13.059 12:28:53 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:07:13.059 12:28:53 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:07:13.059 12:28:53 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:07:13.059 12:28:53 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:07:13.059 12:28:53 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:07:13.059 12:28:53 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:07:13.059 12:28:53 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:07:13.059 12:28:53 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:07:13.059 12:28:53 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:07:13.059 12:28:53 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:07:13.059 12:28:53 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:07:13.059 12:28:53 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:07:13.059 12:28:53 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:07:13.059 12:28:53 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:07:13.059 12:28:53 -- spdk/autotest.sh@374 -- # [[ 1 -eq 1 ]] 00:07:13.059 12:28:53 -- spdk/autotest.sh@375 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:13.059 12:28:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.059 12:28:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.059 12:28:53 -- common/autotest_common.sh@10 -- # set +x 00:07:13.059 ************************************ 00:07:13.059 START TEST llvm_fuzz 00:07:13.059 ************************************ 00:07:13.059 12:28:53 llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:13.059 * Looking for test storage... 
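(Editor's aside, not part of the captured log: the version test above pulls each SPDK_VERSION_* macro out of include/spdk/version.h with grep/cut/tr, composes the version string, and compares it with what the installed Python package reports. A simplified re-implementation of that flow, using only the commands visible in the trace, is sketched below; the helper name and the rc0 handling are inferred from the logged values, not quoted from version.sh.)

  # Extract one SPDK_VERSION_* macro value from the public header.
  get_header_version() {
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h \
          | cut -f2 | tr -d '"'
  }
  major=$(get_header_version MAJOR)    # 25 in this run
  minor=$(get_header_version MINOR)    # 1
  patch=$(get_header_version PATCH)    # 0
  suffix=$(get_header_version SUFFIX)  # -pre
  version="${major}.${minor}"
  if (( patch != 0 )); then version+=".${patch}"; fi
  # In the log the -pre suffix becomes an rc0 pre-release tag (25.1 -> 25.1rc0).
  if [[ $suffix == -pre ]]; then version+="rc0"; fi
  py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
  [[ "$py_version" == "$version" ]]    # both were 25.1rc0 above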
00:07:13.059 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:07:13.059 12:28:53 llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:13.059 12:28:53 llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:07:13.059 12:28:53 llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:13.059 12:28:53 llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:13.059 12:28:53 llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.059 12:28:53 llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.317 12:28:53 llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.317 12:28:53 llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.317 12:28:53 llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.317 12:28:53 llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.317 12:28:53 llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.317 12:28:53 llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.317 12:28:53 llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.317 12:28:53 llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.317 12:28:53 llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.317 12:28:53 llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:13.317 12:28:53 llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:13.317 12:28:53 llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.317 12:28:53 llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:13.317 12:28:53 llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:13.317 12:28:53 llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:13.317 12:28:53 llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.317 12:28:53 llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:13.317 12:28:53 llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.317 12:28:53 llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:13.317 12:28:53 llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:13.317 12:28:53 llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.317 12:28:53 llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:13.317 12:28:53 llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.317 12:28:53 llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.317 12:28:53 llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.317 12:28:53 llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:13.317 12:28:53 llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.317 12:28:53 llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:13.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.317 --rc genhtml_branch_coverage=1 00:07:13.317 --rc genhtml_function_coverage=1 00:07:13.317 --rc genhtml_legend=1 00:07:13.317 --rc geninfo_all_blocks=1 00:07:13.317 --rc geninfo_unexecuted_blocks=1 00:07:13.318 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:13.318 ' 00:07:13.318 12:28:53 llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:13.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.318 --rc genhtml_branch_coverage=1 00:07:13.318 --rc genhtml_function_coverage=1 00:07:13.318 --rc genhtml_legend=1 00:07:13.318 --rc geninfo_all_blocks=1 00:07:13.318 --rc 
geninfo_unexecuted_blocks=1 00:07:13.318 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:13.318 ' 00:07:13.318 12:28:53 llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:13.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.318 --rc genhtml_branch_coverage=1 00:07:13.318 --rc genhtml_function_coverage=1 00:07:13.318 --rc genhtml_legend=1 00:07:13.318 --rc geninfo_all_blocks=1 00:07:13.318 --rc geninfo_unexecuted_blocks=1 00:07:13.318 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:13.318 ' 00:07:13.318 12:28:53 llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:13.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.318 --rc genhtml_branch_coverage=1 00:07:13.318 --rc genhtml_function_coverage=1 00:07:13.318 --rc genhtml_legend=1 00:07:13.318 --rc geninfo_all_blocks=1 00:07:13.318 --rc geninfo_unexecuted_blocks=1 00:07:13.318 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:13.318 ' 00:07:13.318 12:28:53 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:07:13.318 12:28:53 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:07:13.318 12:28:53 llvm_fuzz -- common/autotest_common.sh@550 -- # fuzzers=() 00:07:13.318 12:28:53 llvm_fuzz -- common/autotest_common.sh@550 -- # local fuzzers 00:07:13.318 12:28:53 llvm_fuzz -- common/autotest_common.sh@552 -- # [[ -n '' ]] 00:07:13.318 12:28:53 llvm_fuzz -- common/autotest_common.sh@555 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:07:13.318 12:28:53 llvm_fuzz -- common/autotest_common.sh@556 -- # fuzzers=("${fuzzers[@]##*/}") 00:07:13.318 12:28:53 llvm_fuzz -- common/autotest_common.sh@559 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:07:13.318 12:28:53 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:07:13.318 12:28:53 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:07:13.318 12:28:53 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:13.318 12:28:53 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:13.318 12:28:53 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:13.318 12:28:53 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:13.318 12:28:53 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:13.318 12:28:53 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:13.318 12:28:53 llvm_fuzz -- fuzz/llvm.sh@19 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:13.318 12:28:53 llvm_fuzz -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.318 12:28:53 llvm_fuzz -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.318 12:28:53 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:13.318 ************************************ 00:07:13.318 START TEST nvmf_llvm_fuzz 00:07:13.318 ************************************ 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:13.318 * Looking for test storage... 
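(Editor's aside, not part of the captured log: the llvm.sh prologue above globs test/fuzz/llvm/, strips the directory prefixes, and then case-dispatches only the real fuzzer suites to run_test, which is why the helper scripts in the 'common.sh llvm-gcov.sh nvmf vfio' listing are skipped. A hedged sketch of that loop, reconstructed from the traced commands rather than copied from llvm.sh, follows.)

  # Enumerate fuzzer targets exactly as the trace shows, then dispatch them.
  fuzzers=("$rootdir"/test/fuzz/llvm/*)   # yields common.sh llvm-gcov.sh nvmf vfio here
  fuzzers=("${fuzzers[@]##*/}")           # keep only the basename of each entry
  for fuzzer in "${fuzzers[@]}"; do
      case "$fuzzer" in
          nvmf | vfio)
              # Each real suite has its own run.sh; nvmf_llvm_fuzz is what starts next in the log.
              run_test "${fuzzer}_llvm_fuzz" "$rootdir/test/fuzz/llvm/$fuzzer/run.sh"
              ;;
          *) ;;                           # common.sh and llvm-gcov.sh are support files, not suites
      esac
  done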
00:07:13.318 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.318 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:13.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.581 --rc genhtml_branch_coverage=1 00:07:13.581 --rc genhtml_function_coverage=1 00:07:13.581 --rc genhtml_legend=1 00:07:13.581 --rc geninfo_all_blocks=1 00:07:13.581 --rc geninfo_unexecuted_blocks=1 00:07:13.581 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:13.581 ' 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:13.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.581 --rc genhtml_branch_coverage=1 00:07:13.581 --rc genhtml_function_coverage=1 00:07:13.581 --rc genhtml_legend=1 00:07:13.581 --rc geninfo_all_blocks=1 00:07:13.581 --rc geninfo_unexecuted_blocks=1 00:07:13.581 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:13.581 ' 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:13.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.581 --rc genhtml_branch_coverage=1 00:07:13.581 --rc genhtml_function_coverage=1 00:07:13.581 --rc genhtml_legend=1 00:07:13.581 --rc geninfo_all_blocks=1 00:07:13.581 --rc geninfo_unexecuted_blocks=1 00:07:13.581 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:13.581 ' 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:13.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.581 --rc genhtml_branch_coverage=1 00:07:13.581 --rc genhtml_function_coverage=1 00:07:13.581 --rc genhtml_legend=1 00:07:13.581 --rc geninfo_all_blocks=1 00:07:13.581 --rc geninfo_unexecuted_blocks=1 00:07:13.581 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:13.581 ' 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:07:13.581 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:13.582 #define SPDK_CONFIG_H 00:07:13.582 #define SPDK_CONFIG_AIO_FSDEV 1 00:07:13.582 #define SPDK_CONFIG_APPS 1 00:07:13.582 #define SPDK_CONFIG_ARCH native 00:07:13.582 #undef SPDK_CONFIG_ASAN 00:07:13.582 #undef SPDK_CONFIG_AVAHI 00:07:13.582 #undef SPDK_CONFIG_CET 00:07:13.582 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:07:13.582 #define SPDK_CONFIG_COVERAGE 1 00:07:13.582 #define SPDK_CONFIG_CROSS_PREFIX 00:07:13.582 #undef SPDK_CONFIG_CRYPTO 00:07:13.582 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:13.582 #undef SPDK_CONFIG_CUSTOMOCF 00:07:13.582 #undef SPDK_CONFIG_DAOS 00:07:13.582 #define SPDK_CONFIG_DAOS_DIR 00:07:13.582 #define SPDK_CONFIG_DEBUG 1 00:07:13.582 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:13.582 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:13.582 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:13.582 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:13.582 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:13.582 #undef SPDK_CONFIG_DPDK_UADK 00:07:13.582 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:13.582 #define SPDK_CONFIG_EXAMPLES 1 00:07:13.582 #undef SPDK_CONFIG_FC 00:07:13.582 #define SPDK_CONFIG_FC_PATH 00:07:13.582 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:13.582 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:13.582 #define SPDK_CONFIG_FSDEV 1 00:07:13.582 #undef SPDK_CONFIG_FUSE 00:07:13.582 #define SPDK_CONFIG_FUZZER 1 00:07:13.582 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:13.582 #undef 
SPDK_CONFIG_GOLANG 00:07:13.582 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:13.582 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:13.582 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:13.582 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:13.582 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:13.582 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:13.582 #undef SPDK_CONFIG_HAVE_LZ4 00:07:13.582 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:07:13.582 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:07:13.582 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:13.582 #define SPDK_CONFIG_IDXD 1 00:07:13.582 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:13.582 #undef SPDK_CONFIG_IPSEC_MB 00:07:13.582 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:13.582 #define SPDK_CONFIG_ISAL 1 00:07:13.582 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:13.582 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:13.582 #define SPDK_CONFIG_LIBDIR 00:07:13.582 #undef SPDK_CONFIG_LTO 00:07:13.582 #define SPDK_CONFIG_MAX_LCORES 128 00:07:13.582 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:07:13.582 #define SPDK_CONFIG_NVME_CUSE 1 00:07:13.582 #undef SPDK_CONFIG_OCF 00:07:13.582 #define SPDK_CONFIG_OCF_PATH 00:07:13.582 #define SPDK_CONFIG_OPENSSL_PATH 00:07:13.582 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:13.582 #define SPDK_CONFIG_PGO_DIR 00:07:13.582 #undef SPDK_CONFIG_PGO_USE 00:07:13.582 #define SPDK_CONFIG_PREFIX /usr/local 00:07:13.582 #undef SPDK_CONFIG_RAID5F 00:07:13.582 #undef SPDK_CONFIG_RBD 00:07:13.582 #define SPDK_CONFIG_RDMA 1 00:07:13.582 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:13.582 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:13.582 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:13.582 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:13.582 #undef SPDK_CONFIG_SHARED 00:07:13.582 #undef SPDK_CONFIG_SMA 00:07:13.582 #define SPDK_CONFIG_TESTS 1 00:07:13.582 #undef SPDK_CONFIG_TSAN 00:07:13.582 #define SPDK_CONFIG_UBLK 1 00:07:13.582 #define SPDK_CONFIG_UBSAN 1 00:07:13.582 #undef SPDK_CONFIG_UNIT_TESTS 00:07:13.582 #undef SPDK_CONFIG_URING 00:07:13.582 #define SPDK_CONFIG_URING_PATH 00:07:13.582 #undef SPDK_CONFIG_URING_ZNS 00:07:13.582 #undef SPDK_CONFIG_USDT 00:07:13.582 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:13.582 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:13.582 #define SPDK_CONFIG_VFIO_USER 1 00:07:13.582 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:13.582 #define SPDK_CONFIG_VHOST 1 00:07:13.582 #define SPDK_CONFIG_VIRTIO 1 00:07:13.582 #undef SPDK_CONFIG_VTUNE 00:07:13.582 #define SPDK_CONFIG_VTUNE_DIR 00:07:13.582 #define SPDK_CONFIG_WERROR 1 00:07:13.582 #define SPDK_CONFIG_WPDK_DIR 00:07:13.582 #undef SPDK_CONFIG_XNVME 00:07:13.582 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.582 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:07:13.583 12:28:53 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:07:13.583 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # : 0 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # export 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:13.584 12:28:53 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@206 -- # cat 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # export 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:13.584 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # _LCOV= 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ 1 -eq 1 ]] 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # _LCOV=1 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@275 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # export valgrind= 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # valgrind= 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # uname -s 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@289 -- # MAKE=make 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j72 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@309 -- # TEST_MODE= 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # [[ -z 669728 ]] 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # kill -0 669728 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@344 -- # local mount target_dir 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.Mwf7kL 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.Mwf7kL/tests/nvmf /tmp/spdk.Mwf7kL 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # df -T 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=86454722560 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=94500274176 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # 
uses["$mount"]=8045551616 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=47246708736 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=47250137088 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=18893950976 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=18900058112 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=6107136 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=47249653760 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=47250137088 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=483328 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=9450012672 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=9450024960 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:07:13.585 * Looking for test storage... 
00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@381 -- # local target_space new_size 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # mount=/ 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # target_space=86454722560 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:07:13.585 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@394 -- # new_size=10260144128 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:13.586 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@402 -- # return 0 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # set -o errtrace 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1685 -- # true 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1687 -- # xtrace_fd 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:13.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.586 --rc genhtml_branch_coverage=1 00:07:13.586 --rc genhtml_function_coverage=1 00:07:13.586 --rc genhtml_legend=1 00:07:13.586 --rc geninfo_all_blocks=1 00:07:13.586 --rc geninfo_unexecuted_blocks=1 00:07:13.586 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:13.586 ' 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:13.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.586 --rc genhtml_branch_coverage=1 00:07:13.586 --rc genhtml_function_coverage=1 00:07:13.586 --rc genhtml_legend=1 00:07:13.586 --rc geninfo_all_blocks=1 00:07:13.586 --rc geninfo_unexecuted_blocks=1 00:07:13.586 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:13.586 ' 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:13.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.586 --rc genhtml_branch_coverage=1 00:07:13.586 --rc genhtml_function_coverage=1 00:07:13.586 --rc genhtml_legend=1 00:07:13.586 --rc geninfo_all_blocks=1 00:07:13.586 --rc geninfo_unexecuted_blocks=1 00:07:13.586 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:13.586 ' 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:13.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.586 --rc genhtml_branch_coverage=1 00:07:13.586 --rc genhtml_function_coverage=1 00:07:13.586 --rc genhtml_legend=1 00:07:13.586 --rc geninfo_all_blocks=1 00:07:13.586 --rc geninfo_unexecuted_blocks=1 00:07:13.586 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:13.586 ' 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:07:13.586 12:28:53 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:07:13.586 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:13.845 12:28:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:07:13.845 [2024-11-15 12:28:53.977013] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:07:13.845 [2024-11-15 12:28:53.977096] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid669947 ] 00:07:14.103 [2024-11-15 12:28:54.319157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.103 [2024-11-15 12:28:54.379043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.103 [2024-11-15 12:28:54.438404] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.361 [2024-11-15 12:28:54.454666] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:07:14.361 INFO: Running with entropic power schedule (0xFF, 100). 00:07:14.361 INFO: Seed: 21942401 00:07:14.361 INFO: Loaded 1 modules (387659 inline 8-bit counters): 387659 [0x2c4084c, 0x2c9f297), 00:07:14.361 INFO: Loaded 1 PC tables (387659 PCs): 387659 [0x2c9f298,0x3289748), 00:07:14.361 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:14.361 INFO: A corpus is not provided, starting from an empty corpus 00:07:14.361 #2 INITED exec/s: 0 rss: 66Mb 00:07:14.361 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:14.361 This may also happen if the target rejected all inputs we tried so far 00:07:14.361 [2024-11-15 12:28:54.510135] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:14.361 [2024-11-15 12:28:54.510168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.618 NEW_FUNC[1/715]: 0x43bbc8 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:07:14.618 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:14.618 #23 NEW cov: 12189 ft: 12186 corp: 2/126b lim: 320 exec/s: 0 rss: 73Mb L: 125/125 MS: 1 InsertRepeatedBytes- 00:07:14.618 [2024-11-15 12:28:54.840949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d8) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff 00:07:14.618 [2024-11-15 12:28:54.840988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.618 #41 NEW cov: 12325 ft: 12818 corp: 3/248b lim: 320 exec/s: 0 rss: 74Mb L: 122/125 MS: 3 ChangeByte-ShuffleBytes-CrossOver- 00:07:14.618 [2024-11-15 12:28:54.881037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d8) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff 00:07:14.618 [2024-11-15 12:28:54.881065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.618 #47 NEW cov: 12331 ft: 13188 corp: 4/370b lim: 320 exec/s: 0 rss: 74Mb L: 122/125 MS: 1 ShuffleBytes- 00:07:14.618 [2024-11-15 12:28:54.941432] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT 
DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:14.618 [2024-11-15 12:28:54.941460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.618 [2024-11-15 12:28:54.941537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:14.618 [2024-11-15 12:28:54.941552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.618 [2024-11-15 12:28:54.941613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:14.618 [2024-11-15 12:28:54.941627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.877 NEW_FUNC[1/1]: 0x1530678 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2213 00:07:14.877 #48 NEW cov: 12447 ft: 13696 corp: 5/599b lim: 320 exec/s: 0 rss: 74Mb L: 229/229 MS: 1 CopyPart- 00:07:14.877 [2024-11-15 12:28:55.001282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d8) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff 00:07:14.877 [2024-11-15 12:28:55.001307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.877 #49 NEW cov: 12447 ft: 13749 corp: 6/721b lim: 320 exec/s: 0 rss: 74Mb L: 122/229 MS: 1 ChangeBinInt- 00:07:14.877 [2024-11-15 12:28:55.061544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:edededed cdw10:edededed cdw11:edededed SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.877 [2024-11-15 12:28:55.061570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.877 NEW_FUNC[1/1]: 0x1966ff8 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:07:14.877 #51 NEW cov: 12460 ft: 14158 corp: 7/822b lim: 320 exec/s: 0 rss: 74Mb L: 101/229 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:14.877 [2024-11-15 12:28:55.101615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:edededed cdw10:edededed cdw11:edededed SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.877 [2024-11-15 12:28:55.101640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.877 #52 NEW cov: 12460 ft: 14321 corp: 8/937b lim: 320 exec/s: 0 rss: 74Mb L: 115/229 MS: 1 CrossOver- 00:07:14.877 [2024-11-15 12:28:55.161998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d8) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff 00:07:14.877 [2024-11-15 12:28:55.162025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.877 [2024-11-15 12:28:55.162101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:14.877 [2024-11-15 12:28:55.162116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:07:14.877 [2024-11-15 12:28:55.162180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:14.877 [2024-11-15 12:28:55.162194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.877 #53 NEW cov: 12460 ft: 14443 corp: 9/1177b lim: 320 exec/s: 0 rss: 74Mb L: 240/240 MS: 1 CopyPart- 00:07:14.877 [2024-11-15 12:28:55.201895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d8) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff 00:07:14.877 [2024-11-15 12:28:55.201919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.135 #54 NEW cov: 12460 ft: 14511 corp: 10/1299b lim: 320 exec/s: 0 rss: 74Mb L: 122/240 MS: 1 ChangeBit- 00:07:15.135 [2024-11-15 12:28:55.242198] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:15.135 [2024-11-15 12:28:55.242224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.135 [2024-11-15 12:28:55.242305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:15.135 [2024-11-15 12:28:55.242327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.135 [2024-11-15 12:28:55.242404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:15.135 [2024-11-15 12:28:55.242420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.135 #55 NEW cov: 12460 ft: 14548 corp: 11/1528b lim: 320 exec/s: 0 rss: 74Mb L: 229/240 MS: 1 CrossOver- 00:07:15.135 [2024-11-15 12:28:55.302205] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:15.135 [2024-11-15 12:28:55.302230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.135 #56 NEW cov: 12460 ft: 14576 corp: 12/1653b lim: 320 exec/s: 0 rss: 74Mb L: 125/240 MS: 1 ShuffleBytes- 00:07:15.135 [2024-11-15 12:28:55.342291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:edededed cdw10:edededed cdw11:edededed SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.135 [2024-11-15 12:28:55.342321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.135 NEW_FUNC[1/1]: 0x1c350e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:15.135 #57 NEW cov: 12483 ft: 14650 corp: 13/1726b lim: 320 exec/s: 0 rss: 74Mb L: 73/240 MS: 1 EraseBytes- 00:07:15.135 [2024-11-15 12:28:55.402465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d8) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff 00:07:15.135 [2024-11-15 
12:28:55.402494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.135 #58 NEW cov: 12483 ft: 14656 corp: 14/1848b lim: 320 exec/s: 0 rss: 74Mb L: 122/240 MS: 1 CrossOver- 00:07:15.135 [2024-11-15 12:28:55.442574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d8) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff 00:07:15.135 [2024-11-15 12:28:55.442599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.393 #59 NEW cov: 12483 ft: 14669 corp: 15/1970b lim: 320 exec/s: 0 rss: 74Mb L: 122/240 MS: 1 ChangeBinInt- 00:07:15.393 [2024-11-15 12:28:55.502816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:edededed cdw10:edededed cdw11:edededed SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.393 [2024-11-15 12:28:55.502842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.393 #65 NEW cov: 12483 ft: 14689 corp: 16/2085b lim: 320 exec/s: 65 rss: 74Mb L: 115/240 MS: 1 ShuffleBytes- 00:07:15.393 [2024-11-15 12:28:55.542903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:edededed cdw10:edededed cdw11:edededed SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.393 [2024-11-15 12:28:55.542929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.393 #66 NEW cov: 12483 ft: 14766 corp: 17/2200b lim: 320 exec/s: 66 rss: 74Mb L: 115/240 MS: 1 ShuffleBytes- 00:07:15.393 [2024-11-15 12:28:55.603241] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:15.393 [2024-11-15 12:28:55.603266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.393 [2024-11-15 12:28:55.603360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:15.393 [2024-11-15 12:28:55.603376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.393 [2024-11-15 12:28:55.603438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:15.393 [2024-11-15 12:28:55.603452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.393 #67 NEW cov: 12483 ft: 14783 corp: 18/2430b lim: 320 exec/s: 67 rss: 75Mb L: 230/240 MS: 1 InsertByte- 00:07:15.393 [2024-11-15 12:28:55.643112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d8) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff 00:07:15.393 [2024-11-15 12:28:55.643137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.393 #68 NEW cov: 12483 ft: 14800 corp: 19/2552b lim: 320 exec/s: 68 rss: 75Mb L: 122/240 MS: 1 ShuffleBytes- 00:07:15.393 [2024-11-15 12:28:55.703358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) 
qid:0 cid:4 nsid:edededed cdw10:edededed cdw11:edededed SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.393 [2024-11-15 12:28:55.703383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.393 #69 NEW cov: 12483 ft: 14804 corp: 20/2667b lim: 320 exec/s: 69 rss: 75Mb L: 115/240 MS: 1 ChangeByte- 00:07:15.652 [2024-11-15 12:28:55.743453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d8) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff 00:07:15.652 [2024-11-15 12:28:55.743478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.652 #70 NEW cov: 12483 ft: 14849 corp: 21/2789b lim: 320 exec/s: 70 rss: 75Mb L: 122/240 MS: 1 ChangeBinInt- 00:07:15.652 [2024-11-15 12:28:55.783510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d8) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff 00:07:15.652 [2024-11-15 12:28:55.783537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.652 #71 NEW cov: 12483 ft: 14852 corp: 22/2911b lim: 320 exec/s: 71 rss: 75Mb L: 122/240 MS: 1 ChangeByte- 00:07:15.652 [2024-11-15 12:28:55.823694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:edededed cdw10:edededed cdw11:edededed SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.652 [2024-11-15 12:28:55.823720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.652 #72 NEW cov: 12483 ft: 14861 corp: 23/3026b lim: 320 exec/s: 72 rss: 75Mb L: 115/240 MS: 1 CopyPart- 00:07:15.652 [2024-11-15 12:28:55.883962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d8) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff 00:07:15.652 [2024-11-15 12:28:55.883987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.652 [2024-11-15 12:28:55.884060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff 00:07:15.652 [2024-11-15 12:28:55.884074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.652 #73 NEW cov: 12484 ft: 14997 corp: 24/3166b lim: 320 exec/s: 73 rss: 75Mb L: 140/240 MS: 1 InsertRepeatedBytes- 00:07:15.652 [2024-11-15 12:28:55.943985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d8) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff 00:07:15.652 [2024-11-15 12:28:55.944011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.652 #74 NEW cov: 12484 ft: 14998 corp: 25/3288b lim: 320 exec/s: 74 rss: 75Mb L: 122/240 MS: 1 ChangeBit- 00:07:15.652 [2024-11-15 12:28:55.984334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d8) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff 00:07:15.652 [2024-11-15 12:28:55.984359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.652 [2024-11-15 12:28:55.984437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff 
SGL TRANSPORT DATA BLOCK TRANSPORT 0xfffffffffffff6ff 00:07:15.652 [2024-11-15 12:28:55.984451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.652 [2024-11-15 12:28:55.984512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:15.652 [2024-11-15 12:28:55.984526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.911 #75 NEW cov: 12484 ft: 15021 corp: 26/3527b lim: 320 exec/s: 75 rss: 75Mb L: 239/240 MS: 1 CrossOver- 00:07:15.911 [2024-11-15 12:28:56.044351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (dd) qid:0 cid:4 nsid:dddddddd cdw10:dddddddd cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.911 [2024-11-15 12:28:56.044376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.911 #80 NEW cov: 12484 ft: 15060 corp: 27/3591b lim: 320 exec/s: 80 rss: 75Mb L: 64/240 MS: 5 ShuffleBytes-CopyPart-EraseBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:15.911 [2024-11-15 12:28:56.084422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (dd) qid:0 cid:4 nsid:dddddddd cdw10:dddddddd cdw11:dddddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.911 [2024-11-15 12:28:56.084449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.911 #81 NEW cov: 12484 ft: 15083 corp: 28/3655b lim: 320 exec/s: 81 rss: 75Mb L: 64/240 MS: 1 ShuffleBytes- 00:07:15.911 [2024-11-15 12:28:56.144604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d8) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff 00:07:15.911 [2024-11-15 12:28:56.144630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.911 #82 NEW cov: 12484 ft: 15084 corp: 29/3778b lim: 320 exec/s: 82 rss: 75Mb L: 123/240 MS: 1 InsertByte- 00:07:15.911 [2024-11-15 12:28:56.184705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:dddddddd cdw11:dddddddd 00:07:15.911 [2024-11-15 12:28:56.184730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.911 #87 NEW cov: 12484 ft: 15121 corp: 30/3894b lim: 320 exec/s: 87 rss: 75Mb L: 116/240 MS: 5 CopyPart-InsertRepeatedBytes-ChangeBit-InsertByte-CrossOver- 00:07:15.911 [2024-11-15 12:28:56.225000] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:15.911 [2024-11-15 12:28:56.225026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.911 [2024-11-15 12:28:56.225106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xfffbffffffffffff 00:07:15.911 [2024-11-15 12:28:56.225121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.911 [2024-11-15 12:28:56.225182] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:15.911 [2024-11-15 12:28:56.225196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.911 #88 NEW cov: 12484 ft: 15131 corp: 31/4123b lim: 320 exec/s: 88 rss: 75Mb L: 229/240 MS: 1 ChangeBinInt- 00:07:16.170 [2024-11-15 12:28:56.265013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d8) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff 00:07:16.170 [2024-11-15 12:28:56.265040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.170 #89 NEW cov: 12484 ft: 15142 corp: 32/4245b lim: 320 exec/s: 89 rss: 75Mb L: 122/240 MS: 1 ChangeBit- 00:07:16.170 [2024-11-15 12:28:56.325125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:edededed cdw10:edededed cdw11:edededed SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.170 [2024-11-15 12:28:56.325151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.170 #90 NEW cov: 12484 ft: 15169 corp: 33/4360b lim: 320 exec/s: 90 rss: 75Mb L: 115/240 MS: 1 ShuffleBytes- 00:07:16.170 [2024-11-15 12:28:56.385248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:edededed cdw10:edededed cdw11:edededed SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.170 [2024-11-15 12:28:56.385274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.170 #91 NEW cov: 12484 ft: 15198 corp: 34/4475b lim: 320 exec/s: 91 rss: 75Mb L: 115/240 MS: 1 ChangeBinInt- 00:07:16.170 [2024-11-15 12:28:56.425301] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:16.170 [2024-11-15 12:28:56.425350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.170 #92 NEW cov: 12484 ft: 15213 corp: 35/4600b lim: 320 exec/s: 92 rss: 75Mb L: 125/240 MS: 1 ChangeByte- 00:07:16.170 [2024-11-15 12:28:56.485725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:87878787 cdw10:87878787 cdw11:87878787 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.170 [2024-11-15 12:28:56.485754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.170 [2024-11-15 12:28:56.485819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (87) qid:0 cid:5 nsid:87878787 cdw10:87878787 cdw11:87878787 SGL TRANSPORT DATA BLOCK TRANSPORT 0x8787878787878787 00:07:16.170 [2024-11-15 12:28:56.485834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.170 [2024-11-15 12:28:56.485897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (87) qid:0 cid:6 nsid:edededed cdw10:edededed cdw11:edededed SGL TRANSPORT DATA BLOCK TRANSPORT 0xedededededededed 00:07:16.170 [2024-11-15 12:28:56.485911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:07:16.429 #93 NEW cov: 12484 ft: 15219 corp: 36/4840b lim: 320 exec/s: 46 rss: 75Mb L: 240/240 MS: 1 InsertRepeatedBytes- 00:07:16.429 #93 DONE cov: 12484 ft: 15219 corp: 36/4840b lim: 320 exec/s: 46 rss: 75Mb 00:07:16.429 Done 93 runs in 2 second(s) 00:07:16.429 12:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:07:16.429 12:28:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:16.429 12:28:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:16.429 12:28:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:16.429 12:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:07:16.429 12:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:16.429 12:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:16.429 12:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:16.429 12:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:07:16.429 12:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:16.429 12:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:16.429 12:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:07:16.429 12:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:07:16.429 12:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:16.429 12:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:07:16.429 12:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:16.429 12:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:16.429 12:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:16.429 12:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:07:16.429 [2024-11-15 12:28:56.694511] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:07:16.430 [2024-11-15 12:28:56.694601] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670304 ] 00:07:16.688 [2024-11-15 12:28:57.027889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.946 [2024-11-15 12:28:57.087747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.946 [2024-11-15 12:28:57.146946] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.946 [2024-11-15 12:28:57.163177] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:07:16.946 INFO: Running with entropic power schedule (0xFF, 100). 00:07:16.946 INFO: Seed: 2728950272 00:07:16.946 INFO: Loaded 1 modules (387659 inline 8-bit counters): 387659 [0x2c4084c, 0x2c9f297), 00:07:16.946 INFO: Loaded 1 PC tables (387659 PCs): 387659 [0x2c9f298,0x3289748), 00:07:16.947 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:16.947 INFO: A corpus is not provided, starting from an empty corpus 00:07:16.947 #2 INITED exec/s: 0 rss: 66Mb 00:07:16.947 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:16.947 This may also happen if the target rejected all inputs we tried so far 00:07:16.947 [2024-11-15 12:28:57.218470] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:16.947 [2024-11-15 12:28:57.218589] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:16.947 [2024-11-15 12:28:57.218699] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:16.947 [2024-11-15 12:28:57.218915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.947 [2024-11-15 12:28:57.218945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.947 [2024-11-15 12:28:57.219001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.947 [2024-11-15 12:28:57.219015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.947 [2024-11-15 12:28:57.219069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.947 [2024-11-15 12:28:57.219083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.205 NEW_FUNC[1/716]: 0x43c4c8 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:07:17.205 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:17.205 #4 NEW cov: 12289 ft: 12288 corp: 2/23b lim: 30 exec/s: 0 rss: 74Mb L: 22/22 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:17.465 [2024-11-15 12:28:57.549461] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf 
size (4096) 00:07:17.465 [2024-11-15 12:28:57.549602] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.465 [2024-11-15 12:28:57.549718] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.465 [2024-11-15 12:28:57.549830] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.465 [2024-11-15 12:28:57.550047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.465 [2024-11-15 12:28:57.550081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.465 [2024-11-15 12:28:57.550142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.465 [2024-11-15 12:28:57.550157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.465 [2024-11-15 12:28:57.550215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.465 [2024-11-15 12:28:57.550229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.465 [2024-11-15 12:28:57.550287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.465 [2024-11-15 12:28:57.550302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.465 #5 NEW cov: 12402 ft: 13402 corp: 3/49b lim: 30 exec/s: 0 rss: 74Mb L: 26/26 MS: 1 CopyPart- 00:07:17.465 [2024-11-15 12:28:57.609567] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:17.465 [2024-11-15 12:28:57.609698] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.465 [2024-11-15 12:28:57.609813] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (246724) > buf size (4096) 00:07:17.465 [2024-11-15 12:28:57.609924] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.465 [2024-11-15 12:28:57.610036] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.465 [2024-11-15 12:28:57.610263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.465 [2024-11-15 12:28:57.610290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.465 [2024-11-15 12:28:57.610379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.465 [2024-11-15 12:28:57.610395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.465 [2024-11-15 12:28:57.610454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 
cdw10:f0f000f0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.465 [2024-11-15 12:28:57.610468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.465 [2024-11-15 12:28:57.610527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.465 [2024-11-15 12:28:57.610542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.465 [2024-11-15 12:28:57.610599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.465 [2024-11-15 12:28:57.610613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:17.465 #6 NEW cov: 12408 ft: 13774 corp: 4/79b lim: 30 exec/s: 0 rss: 74Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:07:17.465 [2024-11-15 12:28:57.669707] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:17.465 [2024-11-15 12:28:57.669836] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.465 [2024-11-15 12:28:57.669950] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (246724) > buf size (4096) 00:07:17.465 [2024-11-15 12:28:57.670065] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (651764) > buf size (4096) 00:07:17.465 [2024-11-15 12:28:57.670179] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.465 [2024-11-15 12:28:57.670423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.465 [2024-11-15 12:28:57.670450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.465 [2024-11-15 12:28:57.670507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.465 [2024-11-15 12:28:57.670522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.465 [2024-11-15 12:28:57.670582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:f0f000f0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.465 [2024-11-15 12:28:57.670596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.465 [2024-11-15 12:28:57.670651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:7c7c027c cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.465 [2024-11-15 12:28:57.670665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.465 [2024-11-15 12:28:57.670723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.465 [2024-11-15 12:28:57.670737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:17.465 #7 NEW cov: 12493 ft: 14021 corp: 5/109b lim: 30 exec/s: 0 rss: 74Mb L: 30/30 MS: 1 CrossOver- 00:07:17.465 [2024-11-15 12:28:57.729846] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:17.465 [2024-11-15 12:28:57.729969] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.465 [2024-11-15 12:28:57.730088] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.465 [2024-11-15 12:28:57.730197] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127020) > buf size (4096) 00:07:17.465 [2024-11-15 12:28:57.730425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.465 [2024-11-15 12:28:57.730450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.465 [2024-11-15 12:28:57.730511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.465 [2024-11-15 12:28:57.730526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.465 [2024-11-15 12:28:57.730581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.465 [2024-11-15 12:28:57.730594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.465 [2024-11-15 12:28:57.730650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:7c0a007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.465 [2024-11-15 12:28:57.730663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.465 #13 NEW cov: 12493 ft: 14170 corp: 6/135b lim: 30 exec/s: 0 rss: 74Mb L: 26/30 MS: 1 CrossOver- 00:07:17.465 [2024-11-15 12:28:57.769913] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:17.465 [2024-11-15 12:28:57.770038] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.465 [2024-11-15 12:28:57.770148] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:17.465 [2024-11-15 12:28:57.770261] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:07:17.465 [2024-11-15 12:28:57.770484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.465 [2024-11-15 12:28:57.770510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.465 [2024-11-15 12:28:57.770569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.465 [2024-11-15 12:28:57.770587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:07:17.465 [2024-11-15 12:28:57.770643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:7c7c83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.465 [2024-11-15 12:28:57.770657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.465 [2024-11-15 12:28:57.770713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.465 [2024-11-15 12:28:57.770727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.465 #14 NEW cov: 12499 ft: 14224 corp: 7/163b lim: 30 exec/s: 0 rss: 74Mb L: 28/30 MS: 1 InsertRepeatedBytes- 00:07:17.725 [2024-11-15 12:28:57.810058] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:17.725 [2024-11-15 12:28:57.810180] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x7c7d 00:07:17.725 [2024-11-15 12:28:57.810312] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.725 [2024-11-15 12:28:57.810524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.725 [2024-11-15 12:28:57.810551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.725 [2024-11-15 12:28:57.810611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.725 [2024-11-15 12:28:57.810626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.725 [2024-11-15 12:28:57.810686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.725 [2024-11-15 12:28:57.810700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.725 #15 NEW cov: 12499 ft: 14296 corp: 8/185b lim: 30 exec/s: 0 rss: 74Mb L: 22/30 MS: 1 ChangeBit- 00:07:17.725 [2024-11-15 12:28:57.850222] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:17.725 [2024-11-15 12:28:57.850349] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.725 [2024-11-15 12:28:57.850464] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (246724) > buf size (4096) 00:07:17.725 [2024-11-15 12:28:57.850571] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (651764) > buf size (4096) 00:07:17.725 [2024-11-15 12:28:57.850683] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.725 [2024-11-15 12:28:57.850895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.725 [2024-11-15 12:28:57.850921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.725 [2024-11-15 
12:28:57.850979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.725 [2024-11-15 12:28:57.850992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.725 [2024-11-15 12:28:57.851049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:f0f000f0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.725 [2024-11-15 12:28:57.851063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.725 [2024-11-15 12:28:57.851123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:7c7c027c cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.725 [2024-11-15 12:28:57.851136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.725 [2024-11-15 12:28:57.851195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.725 [2024-11-15 12:28:57.851210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:17.725 #16 NEW cov: 12499 ft: 14312 corp: 9/215b lim: 30 exec/s: 0 rss: 75Mb L: 30/30 MS: 1 ChangeBit- 00:07:17.725 [2024-11-15 12:28:57.910386] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:17.725 [2024-11-15 12:28:57.910527] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.725 [2024-11-15 12:28:57.910642] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (246724) > buf size (4096) 00:07:17.725 [2024-11-15 12:28:57.910756] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.725 [2024-11-15 12:28:57.910873] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.725 [2024-11-15 12:28:57.911108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.725 [2024-11-15 12:28:57.911134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.725 [2024-11-15 12:28:57.911194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.725 [2024-11-15 12:28:57.911210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.725 [2024-11-15 12:28:57.911268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:f0f000f0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.725 [2024-11-15 12:28:57.911283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.725 [2024-11-15 12:28:57.911349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:07:17.725 [2024-11-15 12:28:57.911364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.725 [2024-11-15 12:28:57.911422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.725 [2024-11-15 12:28:57.911435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:17.725 #17 NEW cov: 12499 ft: 14333 corp: 10/245b lim: 30 exec/s: 0 rss: 75Mb L: 30/30 MS: 1 ChangeBit- 00:07:17.725 [2024-11-15 12:28:57.950445] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:17.725 [2024-11-15 12:28:57.950585] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.725 [2024-11-15 12:28:57.950703] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.725 [2024-11-15 12:28:57.950815] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127020) > buf size (4096) 00:07:17.725 [2024-11-15 12:28:57.951036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.725 [2024-11-15 12:28:57.951062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.725 [2024-11-15 12:28:57.951129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.725 [2024-11-15 12:28:57.951144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.725 [2024-11-15 12:28:57.951202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.725 [2024-11-15 12:28:57.951217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.725 [2024-11-15 12:28:57.951275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:7c0a007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.725 [2024-11-15 12:28:57.951288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.725 #19 NEW cov: 12499 ft: 14439 corp: 11/269b lim: 30 exec/s: 0 rss: 75Mb L: 24/30 MS: 2 CopyPart-CrossOver- 00:07:17.725 [2024-11-15 12:28:57.990597] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:17.726 [2024-11-15 12:28:57.990719] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.726 [2024-11-15 12:28:57.990833] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (246724) > buf size (4096) 00:07:17.726 [2024-11-15 12:28:57.990943] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (651764) > buf size (4096) 00:07:17.726 [2024-11-15 12:28:57.991054] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.726 [2024-11-15 12:28:57.991285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.726 [2024-11-15 12:28:57.991310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.726 [2024-11-15 12:28:57.991393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.726 [2024-11-15 12:28:57.991408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.726 [2024-11-15 12:28:57.991466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:f0f000f0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.726 [2024-11-15 12:28:57.991480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.726 [2024-11-15 12:28:57.991539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:7c7c027c cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.726 [2024-11-15 12:28:57.991553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.726 [2024-11-15 12:28:57.991611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.726 [2024-11-15 12:28:57.991625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:17.726 #20 NEW cov: 12499 ft: 14539 corp: 12/299b lim: 30 exec/s: 0 rss: 75Mb L: 30/30 MS: 1 ShuffleBytes- 00:07:17.726 [2024-11-15 12:28:58.030723] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:17.726 [2024-11-15 12:28:58.030847] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.726 [2024-11-15 12:28:58.030959] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (246724) > buf size (4096) 00:07:17.726 [2024-11-15 12:28:58.031074] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.726 [2024-11-15 12:28:58.031190] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.726 [2024-11-15 12:28:58.031421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.726 [2024-11-15 12:28:58.031447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.726 [2024-11-15 12:28:58.031505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.726 [2024-11-15 12:28:58.031519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.726 [2024-11-15 12:28:58.031577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:f0f000f0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.726 [2024-11-15 12:28:58.031591] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.726 [2024-11-15 12:28:58.031645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.726 [2024-11-15 12:28:58.031658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.726 [2024-11-15 12:28:58.031716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.726 [2024-11-15 12:28:58.031729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:17.985 #21 NEW cov: 12499 ft: 14607 corp: 13/329b lim: 30 exec/s: 0 rss: 75Mb L: 30/30 MS: 1 ChangeBit- 00:07:17.985 [2024-11-15 12:28:58.090852] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:17.985 [2024-11-15 12:28:58.090990] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.985 [2024-11-15 12:28:58.091110] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.985 [2024-11-15 12:28:58.091226] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127020) > buf size (4096) 00:07:17.985 [2024-11-15 12:28:58.091457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.985 [2024-11-15 12:28:58.091484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.985 [2024-11-15 12:28:58.091544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.985 [2024-11-15 12:28:58.091559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.985 [2024-11-15 12:28:58.091620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.985 [2024-11-15 12:28:58.091634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.985 [2024-11-15 12:28:58.091692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:7c0a007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.985 [2024-11-15 12:28:58.091707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.985 NEW_FUNC[1/1]: 0x1c350e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:17.985 #22 NEW cov: 12522 ft: 14683 corp: 14/353b lim: 30 exec/s: 0 rss: 75Mb L: 24/30 MS: 1 ChangeByte- 00:07:17.985 [2024-11-15 12:28:58.151053] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:17.985 [2024-11-15 12:28:58.151182] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x7cff 00:07:17.985 [2024-11-15 12:28:58.151299] ctrlr.c:2698:nvmf_ctrlr_get_log_page: 
*ERROR*: Get log page: offset (31804) > len (1028) 00:07:17.985 [2024-11-15 12:28:58.151431] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (651764) > buf size (4096) 00:07:17.985 [2024-11-15 12:28:58.151550] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.985 [2024-11-15 12:28:58.151775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.985 [2024-11-15 12:28:58.151801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.985 [2024-11-15 12:28:58.151861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.985 [2024-11-15 12:28:58.151876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.985 [2024-11-15 12:28:58.151933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:01000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.985 [2024-11-15 12:28:58.151948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.985 [2024-11-15 12:28:58.152009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:7c7c027c cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.985 [2024-11-15 12:28:58.152023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.985 [2024-11-15 12:28:58.152082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.985 [2024-11-15 12:28:58.152095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:17.985 #23 NEW cov: 12535 ft: 14762 corp: 15/383b lim: 30 exec/s: 0 rss: 75Mb L: 30/30 MS: 1 CMP- DE: "\377\001\000\000"- 00:07:17.985 [2024-11-15 12:28:58.211185] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:17.985 [2024-11-15 12:28:58.211307] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (913908) > buf size (4096) 00:07:17.985 [2024-11-15 12:28:58.211426] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:17.985 [2024-11-15 12:28:58.211540] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:07:17.985 [2024-11-15 12:28:58.211759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.985 [2024-11-15 12:28:58.211785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.985 [2024-11-15 12:28:58.211845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c837c cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.985 [2024-11-15 12:28:58.211859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:07:17.985 [2024-11-15 12:28:58.211919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:7c7c83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.985 [2024-11-15 12:28:58.211933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.985 [2024-11-15 12:28:58.211993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.985 [2024-11-15 12:28:58.212006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.985 #24 NEW cov: 12535 ft: 14790 corp: 16/411b lim: 30 exec/s: 24 rss: 75Mb L: 28/30 MS: 1 ChangeByte- 00:07:17.985 [2024-11-15 12:28:58.271343] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:17.985 [2024-11-15 12:28:58.271463] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.985 [2024-11-15 12:28:58.271580] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:17.985 [2024-11-15 12:28:58.271693] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x7c32 00:07:17.985 [2024-11-15 12:28:58.271913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.985 [2024-11-15 12:28:58.271938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.985 [2024-11-15 12:28:58.271999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.985 [2024-11-15 12:28:58.272013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.985 [2024-11-15 12:28:58.272069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.985 [2024-11-15 12:28:58.272082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.985 [2024-11-15 12:28:58.272140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:7c7c000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.985 [2024-11-15 12:28:58.272154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.985 #25 NEW cov: 12535 ft: 14823 corp: 17/436b lim: 30 exec/s: 25 rss: 75Mb L: 25/30 MS: 1 InsertByte- 00:07:18.245 [2024-11-15 12:28:58.331499] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:18.245 [2024-11-15 12:28:58.331621] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x7c7e 00:07:18.245 [2024-11-15 12:28:58.331756] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:18.245 [2024-11-15 12:28:58.331986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.245 [2024-11-15 12:28:58.332012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.245 [2024-11-15 12:28:58.332072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.245 [2024-11-15 12:28:58.332086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.245 [2024-11-15 12:28:58.332148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.245 [2024-11-15 12:28:58.332162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.245 #26 NEW cov: 12535 ft: 14918 corp: 18/458b lim: 30 exec/s: 26 rss: 75Mb L: 22/30 MS: 1 ChangeBinInt- 00:07:18.245 [2024-11-15 12:28:58.391658] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:18.245 [2024-11-15 12:28:58.391782] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x7c7e 00:07:18.245 [2024-11-15 12:28:58.391918] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127988) > buf size (4096) 00:07:18.245 [2024-11-15 12:28:58.392142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.245 [2024-11-15 12:28:58.392172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.245 [2024-11-15 12:28:58.392232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.245 [2024-11-15 12:28:58.392246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.245 [2024-11-15 12:28:58.392303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:7cfc007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.245 [2024-11-15 12:28:58.392320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.245 #27 NEW cov: 12535 ft: 14994 corp: 19/480b lim: 30 exec/s: 27 rss: 75Mb L: 22/30 MS: 1 ChangeBit- 00:07:18.245 [2024-11-15 12:28:58.451818] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:18.245 [2024-11-15 12:28:58.451945] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x7c41 00:07:18.245 [2024-11-15 12:28:58.452062] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:18.245 [2024-11-15 12:28:58.452285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.245 [2024-11-15 12:28:58.452310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.245 [2024-11-15 12:28:58.452372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 
nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.245 [2024-11-15 12:28:58.452386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.245 [2024-11-15 12:28:58.452459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.245 [2024-11-15 12:28:58.452473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.245 #28 NEW cov: 12535 ft: 15083 corp: 20/503b lim: 30 exec/s: 28 rss: 75Mb L: 23/30 MS: 1 InsertByte- 00:07:18.245 [2024-11-15 12:28:58.491934] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:18.245 [2024-11-15 12:28:58.492062] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:18.245 [2024-11-15 12:28:58.492181] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (246724) > buf size (4096) 00:07:18.245 [2024-11-15 12:28:58.492291] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:18.245 [2024-11-15 12:28:58.492537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.245 [2024-11-15 12:28:58.492563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.245 [2024-11-15 12:28:58.492623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.245 [2024-11-15 12:28:58.492637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.245 [2024-11-15 12:28:58.492695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:f0f0007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.245 [2024-11-15 12:28:58.492709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.245 [2024-11-15 12:28:58.492765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.245 [2024-11-15 12:28:58.492785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.245 #29 NEW cov: 12535 ft: 15090 corp: 21/531b lim: 30 exec/s: 29 rss: 75Mb L: 28/30 MS: 1 EraseBytes- 00:07:18.245 [2024-11-15 12:28:58.532117] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:18.245 [2024-11-15 12:28:58.532256] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:18.245 [2024-11-15 12:28:58.532379] ctrlr.c:2698:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (31868) > len (4) 00:07:18.245 [2024-11-15 12:28:58.532484] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:18.245 [2024-11-15 12:28:58.532606] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:18.245 [2024-11-15 
12:28:58.532826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.245 [2024-11-15 12:28:58.532852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.245 [2024-11-15 12:28:58.532910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.245 [2024-11-15 12:28:58.532925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.245 [2024-11-15 12:28:58.532983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0000007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.245 [2024-11-15 12:28:58.532997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.245 [2024-11-15 12:28:58.533054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.245 [2024-11-15 12:28:58.533068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.245 [2024-11-15 12:28:58.533140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.245 [2024-11-15 12:28:58.533155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:18.245 #30 NEW cov: 12535 ft: 15111 corp: 22/561b lim: 30 exec/s: 30 rss: 75Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:07:18.245 [2024-11-15 12:28:58.572221] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:18.245 [2024-11-15 12:28:58.572350] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:18.245 [2024-11-15 12:28:58.572471] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (913908) > buf size (4096) 00:07:18.245 [2024-11-15 12:28:58.572584] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300008383 00:07:18.245 [2024-11-15 12:28:58.572805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.245 [2024-11-15 12:28:58.572831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.245 [2024-11-15 12:28:58.572889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.245 [2024-11-15 12:28:58.572904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.245 [2024-11-15 12:28:58.572961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:7c7c83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.245 [2024-11-15 12:28:58.572977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.245 [2024-11-15 12:28:58.573035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00008383 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.245 [2024-11-15 12:28:58.573049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.504 #31 NEW cov: 12535 ft: 15136 corp: 23/589b lim: 30 exec/s: 31 rss: 75Mb L: 28/30 MS: 1 ChangeBinInt- 00:07:18.504 [2024-11-15 12:28:58.612350] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:18.504 [2024-11-15 12:28:58.612492] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:18.504 [2024-11-15 12:28:58.612604] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:18.504 [2024-11-15 12:28:58.612721] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (651764) > buf size (4096) 00:07:18.504 [2024-11-15 12:28:58.612938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.504 [2024-11-15 12:28:58.612965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.504 [2024-11-15 12:28:58.613026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.504 [2024-11-15 12:28:58.613041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.504 [2024-11-15 12:28:58.613101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.504 [2024-11-15 12:28:58.613115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.504 [2024-11-15 12:28:58.613171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:7c7c027c cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.504 [2024-11-15 12:28:58.613186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.504 #32 NEW cov: 12535 ft: 15208 corp: 24/617b lim: 30 exec/s: 32 rss: 75Mb L: 28/30 MS: 1 CrossOver- 00:07:18.504 [2024-11-15 12:28:58.672541] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:18.504 [2024-11-15 12:28:58.672681] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:18.504 [2024-11-15 12:28:58.672800] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127940) > buf size (4096) 00:07:18.504 [2024-11-15 12:28:58.672912] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (651764) > buf size (4096) 00:07:18.504 [2024-11-15 12:28:58.673039] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:18.504 [2024-11-15 12:28:58.673264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.504 [2024-11-15 
12:28:58.673290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.505 [2024-11-15 12:28:58.673348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.505 [2024-11-15 12:28:58.673364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.505 [2024-11-15 12:28:58.673423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:7cf000f0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.505 [2024-11-15 12:28:58.673441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.505 [2024-11-15 12:28:58.673499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:7c7c027c cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.505 [2024-11-15 12:28:58.673513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.505 [2024-11-15 12:28:58.673572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.505 [2024-11-15 12:28:58.673585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:18.505 #33 NEW cov: 12535 ft: 15218 corp: 25/647b lim: 30 exec/s: 33 rss: 75Mb L: 30/30 MS: 1 ShuffleBytes- 00:07:18.505 [2024-11-15 12:28:58.732680] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:18.505 [2024-11-15 12:28:58.732802] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:18.505 [2024-11-15 12:28:58.732915] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:18.505 [2024-11-15 12:28:58.733029] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (651764) > buf size (4096) 00:07:18.505 [2024-11-15 12:28:58.733158] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (651764) > buf size (4096) 00:07:18.505 [2024-11-15 12:28:58.733389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.505 [2024-11-15 12:28:58.733414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.505 [2024-11-15 12:28:58.733476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.505 [2024-11-15 12:28:58.733490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.505 [2024-11-15 12:28:58.733548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.505 [2024-11-15 12:28:58.733562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.505 [2024-11-15 
12:28:58.733620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:7c7c027c cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.505 [2024-11-15 12:28:58.733634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.505 [2024-11-15 12:28:58.733690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:7c7c027c cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.505 [2024-11-15 12:28:58.733704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:18.505 #34 NEW cov: 12535 ft: 15227 corp: 26/677b lim: 30 exec/s: 34 rss: 76Mb L: 30/30 MS: 1 CrossOver- 00:07:18.505 [2024-11-15 12:28:58.792773] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:18.505 [2024-11-15 12:28:58.792911] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x7c7e 00:07:18.505 [2024-11-15 12:28:58.793027] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:18.505 [2024-11-15 12:28:58.793247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.505 [2024-11-15 12:28:58.793273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.505 [2024-11-15 12:28:58.793343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.505 [2024-11-15 12:28:58.793358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.505 [2024-11-15 12:28:58.793417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.505 [2024-11-15 12:28:58.793431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.505 #35 NEW cov: 12535 ft: 15254 corp: 27/699b lim: 30 exec/s: 35 rss: 76Mb L: 22/30 MS: 1 ShuffleBytes- 00:07:18.505 [2024-11-15 12:28:58.832982] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:18.505 [2024-11-15 12:28:58.833121] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127180) > buf size (4096) 00:07:18.505 [2024-11-15 12:28:58.833237] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (246724) > buf size (4096) 00:07:18.505 [2024-11-15 12:28:58.833361] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:18.505 [2024-11-15 12:28:58.833475] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:18.505 [2024-11-15 12:28:58.833700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.505 [2024-11-15 12:28:58.833725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.505 [2024-11-15 12:28:58.833785] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c32007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.505 [2024-11-15 12:28:58.833800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.505 [2024-11-15 12:28:58.833859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:f0f000f0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.505 [2024-11-15 12:28:58.833873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.505 [2024-11-15 12:28:58.833932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.505 [2024-11-15 12:28:58.833945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.505 [2024-11-15 12:28:58.834002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.505 [2024-11-15 12:28:58.834016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:18.764 #36 NEW cov: 12535 ft: 15280 corp: 28/729b lim: 30 exec/s: 36 rss: 76Mb L: 30/30 MS: 1 CrossOver- 00:07:18.764 [2024-11-15 12:28:58.873043] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:18.764 [2024-11-15 12:28:58.873183] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:18.764 [2024-11-15 12:28:58.873301] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (91636) > buf size (4096) 00:07:18.764 [2024-11-15 12:28:58.873421] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127020) > buf size (4096) 00:07:18.764 [2024-11-15 12:28:58.873648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.764 [2024-11-15 12:28:58.873675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.764 [2024-11-15 12:28:58.873735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.764 [2024-11-15 12:28:58.873752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.764 [2024-11-15 12:28:58.873813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:597c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.764 [2024-11-15 12:28:58.873827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.764 [2024-11-15 12:28:58.873887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:7c0a007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.764 [2024-11-15 12:28:58.873901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 
dnr:0 00:07:18.764 #37 NEW cov: 12535 ft: 15312 corp: 29/753b lim: 30 exec/s: 37 rss: 76Mb L: 24/30 MS: 1 ChangeByte- 00:07:18.764 [2024-11-15 12:28:58.913154] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (242164) > buf size (4096) 00:07:18.764 [2024-11-15 12:28:58.913279] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x7c7e 00:07:18.764 [2024-11-15 12:28:58.913420] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127988) > buf size (4096) 00:07:18.764 [2024-11-15 12:28:58.913635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ec7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.764 [2024-11-15 12:28:58.913662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.764 [2024-11-15 12:28:58.913724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.764 [2024-11-15 12:28:58.913739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.764 [2024-11-15 12:28:58.913798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:7cfc007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.764 [2024-11-15 12:28:58.913812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.764 #38 NEW cov: 12535 ft: 15317 corp: 30/775b lim: 30 exec/s: 38 rss: 76Mb L: 22/30 MS: 1 ChangeByte- 00:07:18.764 [2024-11-15 12:28:58.973400] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:18.764 [2024-11-15 12:28:58.973529] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:18.764 [2024-11-15 12:28:58.973663] ctrlr.c:2698:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (31868) > len (4) 00:07:18.764 [2024-11-15 12:28:58.973780] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:18.764 [2024-11-15 12:28:58.973901] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:18.764 [2024-11-15 12:28:58.974130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.764 [2024-11-15 12:28:58.974157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.764 [2024-11-15 12:28:58.974218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.764 [2024-11-15 12:28:58.974233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.764 [2024-11-15 12:28:58.974289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0000007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.764 [2024-11-15 12:28:58.974309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.764 [2024-11-15 12:28:58.974374] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.764 [2024-11-15 12:28:58.974389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.764 [2024-11-15 12:28:58.974446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.764 [2024-11-15 12:28:58.974459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:18.764 #39 NEW cov: 12535 ft: 15323 corp: 31/805b lim: 30 exec/s: 39 rss: 76Mb L: 30/30 MS: 1 ShuffleBytes- 00:07:18.764 [2024-11-15 12:28:59.033535] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:18.764 [2024-11-15 12:28:59.033664] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:18.764 [2024-11-15 12:28:59.033782] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:18.764 [2024-11-15 12:28:59.034011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.764 [2024-11-15 12:28:59.034036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.764 [2024-11-15 12:28:59.034091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.764 [2024-11-15 12:28:59.034105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.764 [2024-11-15 12:28:59.034162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.764 [2024-11-15 12:28:59.034175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.764 #40 NEW cov: 12535 ft: 15335 corp: 32/825b lim: 30 exec/s: 40 rss: 76Mb L: 20/30 MS: 1 EraseBytes- 00:07:18.764 [2024-11-15 12:28:59.073673] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:18.764 [2024-11-15 12:28:59.073798] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (913908) > buf size (4096) 00:07:18.764 [2024-11-15 12:28:59.073917] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:18.764 [2024-11-15 12:28:59.074033] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:07:18.765 [2024-11-15 12:28:59.074257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.765 [2024-11-15 12:28:59.074282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.765 [2024-11-15 12:28:59.074356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c837c cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:18.765 [2024-11-15 12:28:59.074372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.765 [2024-11-15 12:28:59.074428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:7c7c837c cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.765 [2024-11-15 12:28:59.074442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.765 [2024-11-15 12:28:59.074497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.765 [2024-11-15 12:28:59.074514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:19.024 #46 NEW cov: 12535 ft: 15374 corp: 33/854b lim: 30 exec/s: 46 rss: 76Mb L: 29/30 MS: 1 InsertByte- 00:07:19.024 [2024-11-15 12:28:59.133765] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (797172) > buf size (4096) 00:07:19.024 [2024-11-15 12:28:59.133909] ctrlr.c:2698:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (31868) > len (964) 00:07:19.024 [2024-11-15 12:28:59.134032] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127020) > buf size (4096) 00:07:19.024 [2024-11-15 12:28:59.134248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c837c cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.024 [2024-11-15 12:28:59.134274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.024 [2024-11-15 12:28:59.134333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00f0007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.024 [2024-11-15 12:28:59.134348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.024 [2024-11-15 12:28:59.134404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:7c0a007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.024 [2024-11-15 12:28:59.134418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.024 #47 NEW cov: 12535 ft: 15398 corp: 34/876b lim: 30 exec/s: 47 rss: 76Mb L: 22/30 MS: 1 EraseBytes- 00:07:19.024 [2024-11-15 12:28:59.193974] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10740) > buf size (4096) 00:07:19.024 [2024-11-15 12:28:59.194103] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (914432) > buf size (4096) 00:07:19.024 [2024-11-15 12:28:59.194222] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:19.024 [2024-11-15 12:28:59.194348] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:07:19.024 [2024-11-15 12:28:59.194582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.024 [2024-11-15 12:28:59.194608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:19.024 [2024-11-15 12:28:59.194665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7cff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.024 [2024-11-15 12:28:59.194679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.024 [2024-11-15 12:28:59.194735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.024 [2024-11-15 12:28:59.194748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.024 [2024-11-15 12:28:59.194803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.024 [2024-11-15 12:28:59.194817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:19.024 #48 NEW cov: 12535 ft: 15405 corp: 35/902b lim: 30 exec/s: 24 rss: 76Mb L: 26/30 MS: 1 InsertRepeatedBytes- 00:07:19.024 #48 DONE cov: 12535 ft: 15405 corp: 35/902b lim: 30 exec/s: 24 rss: 76Mb 00:07:19.024 ###### Recommended dictionary. ###### 00:07:19.024 "\377\001\000\000" # Uses: 0 00:07:19.024 ###### End of recommended dictionary. ###### 00:07:19.024 Done 48 runs in 2 second(s) 00:07:19.024 12:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:07:19.024 12:28:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:19.024 12:28:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:19.024 12:28:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:19.024 12:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:07:19.024 12:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:19.024 12:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:19.024 12:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:19.024 12:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:07:19.024 12:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:19.024 12:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:19.024 12:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:07:19.024 12:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:07:19.024 12:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:19.024 12:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:07:19.024 12:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:19.024 12:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:19.024 12:28:59 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:19.024 12:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:07:19.282 [2024-11-15 12:28:59.381123] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:07:19.282 [2024-11-15 12:28:59.381200] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670665 ] 00:07:19.541 [2024-11-15 12:28:59.710745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.541 [2024-11-15 12:28:59.761693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.541 [2024-11-15 12:28:59.820874] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.541 [2024-11-15 12:28:59.837095] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:07:19.541 INFO: Running with entropic power schedule (0xFF, 100). 00:07:19.541 INFO: Seed: 1107989046 00:07:19.541 INFO: Loaded 1 modules (387659 inline 8-bit counters): 387659 [0x2c4084c, 0x2c9f297), 00:07:19.541 INFO: Loaded 1 PC tables (387659 PCs): 387659 [0x2c9f298,0x3289748), 00:07:19.541 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:19.541 INFO: A corpus is not provided, starting from an empty corpus 00:07:19.541 #2 INITED exec/s: 0 rss: 66Mb 00:07:19.541 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:19.541 This may also happen if the target rejected all inputs we tried so far 00:07:19.800 [2024-11-15 12:28:59.892841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a5c000a cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.800 [2024-11-15 12:28:59.892871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.800 [2024-11-15 12:28:59.892932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:5c5c005c cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.800 [2024-11-15 12:28:59.892950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.800 [2024-11-15 12:28:59.893010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:5c5c005c cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.800 [2024-11-15 12:28:59.893024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.058 NEW_FUNC[1/715]: 0x43ef78 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:07:20.058 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:20.058 #14 NEW cov: 12228 ft: 12227 corp: 2/26b lim: 35 exec/s: 0 rss: 73Mb L: 25/25 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:20.058 [2024-11-15 12:29:00.223505] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:20.058 [2024-11-15 12:29:00.223752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a5c000a cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.058 [2024-11-15 12:29:00.223797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.058 [2024-11-15 12:29:00.223865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:5c5c005c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.058 [2024-11-15 12:29:00.223884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.058 [2024-11-15 12:29:00.223949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:5c00195c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.058 [2024-11-15 12:29:00.223971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.058 #15 NEW cov: 12352 ft: 12843 corp: 3/51b lim: 35 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 ChangeBinInt- 00:07:20.058 [2024-11-15 12:29:00.283697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f9a3000a cdw11:5c00a3a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.058 [2024-11-15 12:29:00.283724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.058 [2024-11-15 12:29:00.283781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:5c5c005c cdw11:5c005c5c SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:20.058 [2024-11-15 12:29:00.283796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.058 [2024-11-15 12:29:00.283853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:5c5c005c cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.058 [2024-11-15 12:29:00.283866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.058 #16 NEW cov: 12358 ft: 13132 corp: 4/76b lim: 35 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 ChangeBinInt- 00:07:20.058 [2024-11-15 12:29:00.323572] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:20.058 [2024-11-15 12:29:00.323789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a5c000a cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.058 [2024-11-15 12:29:00.323816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.058 [2024-11-15 12:29:00.323872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:5c5c005c cdw11:00004000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.058 [2024-11-15 12:29:00.323887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.058 [2024-11-15 12:29:00.323947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:5c00195c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.059 [2024-11-15 12:29:00.323962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.059 #17 NEW cov: 12443 ft: 13380 corp: 5/101b lim: 35 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 ChangeBit- 00:07:20.059 [2024-11-15 12:29:00.383795] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:20.059 [2024-11-15 12:29:00.384023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a5c000a cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.059 [2024-11-15 12:29:00.384050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.059 [2024-11-15 12:29:00.384136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:5c5c005c cdw11:00005c00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.059 [2024-11-15 12:29:00.384152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.059 [2024-11-15 12:29:00.384211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:5c000019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.059 [2024-11-15 12:29:00.384228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.317 #18 NEW cov: 12443 ft: 13494 corp: 6/127b lim: 35 exec/s: 0 rss: 73Mb L: 26/26 MS: 1 CrossOver- 00:07:20.317 [2024-11-15 12:29:00.423979] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:20.317 [2024-11-15 12:29:00.424209] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:c7c700c7 cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.317 [2024-11-15 12:29:00.424236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.317 [2024-11-15 12:29:00.424294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:5c5c000a cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.317 [2024-11-15 12:29:00.424308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.317 [2024-11-15 12:29:00.424368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:5c40005c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.317 [2024-11-15 12:29:00.424382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.317 [2024-11-15 12:29:00.424437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00190000 cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.317 [2024-11-15 12:29:00.424452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.317 #19 NEW cov: 12443 ft: 14084 corp: 7/158b lim: 35 exec/s: 0 rss: 73Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:07:20.317 [2024-11-15 12:29:00.484227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a5c000a cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.317 [2024-11-15 12:29:00.484255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.317 [2024-11-15 12:29:00.484310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:5c5c005c cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.317 [2024-11-15 12:29:00.484329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.317 [2024-11-15 12:29:00.484386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:5c5c005c cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.317 [2024-11-15 12:29:00.484403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.317 #20 NEW cov: 12443 ft: 14151 corp: 8/183b lim: 35 exec/s: 0 rss: 73Mb L: 25/31 MS: 1 ShuffleBytes- 00:07:20.318 [2024-11-15 12:29:00.524465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0aff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.318 [2024-11-15 12:29:00.524492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.318 [2024-11-15 12:29:00.524551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ff4700ff cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.318 [2024-11-15 12:29:00.524566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.318 [2024-11-15 12:29:00.524621] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:5c5c005c cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.318 [2024-11-15 12:29:00.524635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.318 [2024-11-15 12:29:00.524688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:5c5c005c cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.318 [2024-11-15 12:29:00.524702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.318 #21 NEW cov: 12443 ft: 14180 corp: 9/216b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 CMP- DE: "\377\377\377\377\377\377\377G"- 00:07:20.318 [2024-11-15 12:29:00.564287] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:20.318 [2024-11-15 12:29:00.564510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a5c000a cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.318 [2024-11-15 12:29:00.564536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.318 [2024-11-15 12:29:00.564593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:5c5c005c cdw11:00005c00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.318 [2024-11-15 12:29:00.564608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.318 [2024-11-15 12:29:00.564662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:5c000019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.318 [2024-11-15 12:29:00.564677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.318 #22 NEW cov: 12443 ft: 14218 corp: 10/241b lim: 35 exec/s: 0 rss: 74Mb L: 25/33 MS: 1 EraseBytes- 00:07:20.318 [2024-11-15 12:29:00.624360] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:20.318 [2024-11-15 12:29:00.624480] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:20.318 [2024-11-15 12:29:00.624687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a5c000a cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.318 [2024-11-15 12:29:00.624714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.318 [2024-11-15 12:29:00.624775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.318 [2024-11-15 12:29:00.624791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.318 [2024-11-15 12:29:00.624848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:5c000019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.318 [2024-11-15 12:29:00.624867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:07:20.318 #23 NEW cov: 12443 ft: 14328 corp: 11/267b lim: 35 exec/s: 0 rss: 74Mb L: 26/33 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:07:20.577 [2024-11-15 12:29:00.664742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a5c000a cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.577 [2024-11-15 12:29:00.664768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.577 [2024-11-15 12:29:00.664823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:5c5c005c cdw11:00004000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.577 [2024-11-15 12:29:00.664838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.577 [2024-11-15 12:29:00.664894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.577 [2024-11-15 12:29:00.664907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.577 #24 NEW cov: 12443 ft: 14355 corp: 12/292b lim: 35 exec/s: 0 rss: 74Mb L: 25/33 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377G"- 00:07:20.577 [2024-11-15 12:29:00.704985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f9a3000a cdw11:5c00a3a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.577 [2024-11-15 12:29:00.705010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.577 [2024-11-15 12:29:00.705084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:5c5c005c cdw11:fe005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.577 [2024-11-15 12:29:00.705099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.577 [2024-11-15 12:29:00.705154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:fefe00fe cdw11:5c00fefe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.577 [2024-11-15 12:29:00.705168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.577 [2024-11-15 12:29:00.705221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:5c5c005c cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.577 [2024-11-15 12:29:00.705235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.577 #25 NEW cov: 12443 ft: 14450 corp: 13/324b lim: 35 exec/s: 0 rss: 74Mb L: 32/33 MS: 1 InsertRepeatedBytes- 00:07:20.577 [2024-11-15 12:29:00.765144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f9a3000a cdw11:5c00a3a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.577 [2024-11-15 12:29:00.765168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.577 [2024-11-15 12:29:00.765225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:5c5c005c cdw11:fe005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:20.577 [2024-11-15 12:29:00.765239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.577 [2024-11-15 12:29:00.765307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:070100fe cdw11:5c00fefe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.577 [2024-11-15 12:29:00.765326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.577 [2024-11-15 12:29:00.765383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:5c5c005c cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.577 [2024-11-15 12:29:00.765397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.577 NEW_FUNC[1/1]: 0x1c350e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:20.577 #26 NEW cov: 12466 ft: 14496 corp: 14/356b lim: 35 exec/s: 0 rss: 74Mb L: 32/33 MS: 1 ChangeBinInt- 00:07:20.577 [2024-11-15 12:29:00.825152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:5c5c000a cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.577 [2024-11-15 12:29:00.825178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.577 [2024-11-15 12:29:00.825234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:5c5c000a cdw11:00004000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.577 [2024-11-15 12:29:00.825248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.577 [2024-11-15 12:29:00.825305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.577 [2024-11-15 12:29:00.825322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.577 #27 NEW cov: 12466 ft: 14511 corp: 15/381b lim: 35 exec/s: 0 rss: 74Mb L: 25/33 MS: 1 ShuffleBytes- 00:07:20.577 [2024-11-15 12:29:00.885326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:5c5c000a cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.577 [2024-11-15 12:29:00.885353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.577 [2024-11-15 12:29:00.885426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:005c000a cdw11:00005c40 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.577 [2024-11-15 12:29:00.885440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.577 [2024-11-15 12:29:00.885506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.577 [2024-11-15 12:29:00.885519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.836 #28 NEW cov: 12466 ft: 14513 corp: 16/406b lim: 35 exec/s: 28 rss: 74Mb L: 25/33 MS: 1 
ShuffleBytes- 00:07:20.836 [2024-11-15 12:29:00.945538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:5c5c000a cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.836 [2024-11-15 12:29:00.945564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.836 [2024-11-15 12:29:00.945636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:005c000a cdw11:00005c40 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.836 [2024-11-15 12:29:00.945651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.836 [2024-11-15 12:29:00.945706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.836 [2024-11-15 12:29:00.945720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.836 #29 NEW cov: 12466 ft: 14569 corp: 17/431b lim: 35 exec/s: 29 rss: 74Mb L: 25/33 MS: 1 CopyPart- 00:07:20.836 [2024-11-15 12:29:01.005385] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:20.836 [2024-11-15 12:29:01.005515] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:20.836 [2024-11-15 12:29:01.005733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a5c000a cdw11:00005c00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.836 [2024-11-15 12:29:01.005760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.836 [2024-11-15 12:29:01.005817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.836 [2024-11-15 12:29:01.005834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.836 [2024-11-15 12:29:01.005891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:5c000019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.836 [2024-11-15 12:29:01.005906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.836 #30 NEW cov: 12466 ft: 14618 corp: 18/456b lim: 35 exec/s: 30 rss: 74Mb L: 25/33 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:20.836 [2024-11-15 12:29:01.065652] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:20.836 [2024-11-15 12:29:01.065888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a5c000a cdw11:5c004c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.836 [2024-11-15 12:29:01.065913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.836 [2024-11-15 12:29:01.065970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:5c5c005c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.836 [2024-11-15 12:29:01.065984] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.836 [2024-11-15 12:29:01.066040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:5c00195c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.836 [2024-11-15 12:29:01.066056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.836 #31 NEW cov: 12466 ft: 14663 corp: 19/481b lim: 35 exec/s: 31 rss: 74Mb L: 25/33 MS: 1 ChangeBit- 00:07:20.836 [2024-11-15 12:29:01.105673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.836 [2024-11-15 12:29:01.105699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.836 #33 NEW cov: 12466 ft: 15054 corp: 20/490b lim: 35 exec/s: 33 rss: 74Mb L: 9/33 MS: 2 CopyPart-PersAutoDict- DE: "\377\377\377\377\377\377\377G"- 00:07:20.836 [2024-11-15 12:29:01.145815] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:20.836 [2024-11-15 12:29:01.146043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a5c000a cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.836 [2024-11-15 12:29:01.146068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.836 [2024-11-15 12:29:01.146126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:5c5c005c cdw11:0d000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.836 [2024-11-15 12:29:01.146141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.836 [2024-11-15 12:29:01.146197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:5c00195c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.836 [2024-11-15 12:29:01.146212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.836 #34 NEW cov: 12466 ft: 15058 corp: 21/515b lim: 35 exec/s: 34 rss: 74Mb L: 25/33 MS: 1 CMP- DE: "\015\000\000\000"- 00:07:21.095 [2024-11-15 12:29:01.185927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a5c000a cdw11:5c000af9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.095 [2024-11-15 12:29:01.185953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.095 #35 NEW cov: 12466 ft: 15126 corp: 22/522b lim: 35 exec/s: 35 rss: 74Mb L: 7/33 MS: 1 CrossOver- 00:07:21.095 [2024-11-15 12:29:01.226403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0aff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.095 [2024-11-15 12:29:01.226436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.095 [2024-11-15 12:29:01.226492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ff4700ff cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.095 [2024-11-15 12:29:01.226505] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.095 [2024-11-15 12:29:01.226561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:5c5c005c cdw11:5c005c0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.095 [2024-11-15 12:29:01.226575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.095 [2024-11-15 12:29:01.226628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:5c5c005c cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.095 [2024-11-15 12:29:01.226641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.095 #36 NEW cov: 12466 ft: 15213 corp: 23/556b lim: 35 exec/s: 36 rss: 74Mb L: 34/34 MS: 1 CrossOver- 00:07:21.095 [2024-11-15 12:29:01.286564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0aff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.095 [2024-11-15 12:29:01.286588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.095 [2024-11-15 12:29:01.286644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ff4700ff cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.095 [2024-11-15 12:29:01.286658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.095 [2024-11-15 12:29:01.286713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:5c5c005c cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.096 [2024-11-15 12:29:01.286727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.096 [2024-11-15 12:29:01.286780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:5c5c005c cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.096 [2024-11-15 12:29:01.286794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.096 #37 NEW cov: 12466 ft: 15218 corp: 24/589b lim: 35 exec/s: 37 rss: 74Mb L: 33/34 MS: 1 ShuffleBytes- 00:07:21.096 [2024-11-15 12:29:01.326583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a5c000a cdw11:5c00235c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.096 [2024-11-15 12:29:01.326609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.096 [2024-11-15 12:29:01.326665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:5c5c005c cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.096 [2024-11-15 12:29:01.326679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.096 [2024-11-15 12:29:01.326737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:5c5c005c cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.096 [2024-11-15 12:29:01.326750] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.096 #38 NEW cov: 12466 ft: 15245 corp: 25/615b lim: 35 exec/s: 38 rss: 74Mb L: 26/34 MS: 1 InsertByte- 00:07:21.096 [2024-11-15 12:29:01.386424] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:21.096 [2024-11-15 12:29:01.386556] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:21.096 [2024-11-15 12:29:01.386773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a5c000a cdw11:00005c00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.096 [2024-11-15 12:29:01.386798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.096 [2024-11-15 12:29:01.386854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.096 [2024-11-15 12:29:01.386870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.096 [2024-11-15 12:29:01.386926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:19000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.096 [2024-11-15 12:29:01.386941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.096 #39 NEW cov: 12466 ft: 15256 corp: 26/641b lim: 35 exec/s: 39 rss: 75Mb L: 26/34 MS: 1 InsertByte- 00:07:21.354 [2024-11-15 12:29:01.446685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:35ff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.354 [2024-11-15 12:29:01.446712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.354 #40 NEW cov: 12466 ft: 15261 corp: 27/651b lim: 35 exec/s: 40 rss: 75Mb L: 10/34 MS: 1 InsertByte- 00:07:21.354 [2024-11-15 12:29:01.507184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f9a3000a cdw11:5c00a3a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.354 [2024-11-15 12:29:01.507210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.354 [2024-11-15 12:29:01.507283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:5c5c005c cdw11:fe005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.354 [2024-11-15 12:29:01.507298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.354 [2024-11-15 12:29:01.507354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:070100fe cdw11:5c00fefe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.354 [2024-11-15 12:29:01.507369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.354 [2024-11-15 12:29:01.507423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:5c5c005c cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.354 [2024-11-15 12:29:01.507436] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.354 #41 NEW cov: 12466 ft: 15274 corp: 28/683b lim: 35 exec/s: 41 rss: 75Mb L: 32/34 MS: 1 ShuffleBytes- 00:07:21.354 [2024-11-15 12:29:01.567345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0aff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.354 [2024-11-15 12:29:01.567370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.354 [2024-11-15 12:29:01.567429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ff4700ff cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.354 [2024-11-15 12:29:01.567442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.354 [2024-11-15 12:29:01.567496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:5c5c005c cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.354 [2024-11-15 12:29:01.567510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.354 [2024-11-15 12:29:01.567564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:5c5c005c cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.354 [2024-11-15 12:29:01.567578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.354 #42 NEW cov: 12466 ft: 15305 corp: 29/717b lim: 35 exec/s: 42 rss: 75Mb L: 34/34 MS: 1 CrossOver- 00:07:21.354 [2024-11-15 12:29:01.607394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a5c000a cdw11:2b000a2b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.354 [2024-11-15 12:29:01.607419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.354 [2024-11-15 12:29:01.607474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2b2b002b cdw11:2b002b2b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.354 [2024-11-15 12:29:01.607487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.354 [2024-11-15 12:29:01.607559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:2b2b002b cdw11:5c002bf9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.354 [2024-11-15 12:29:01.607573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.354 #43 NEW cov: 12466 ft: 15339 corp: 30/738b lim: 35 exec/s: 43 rss: 75Mb L: 21/34 MS: 1 InsertRepeatedBytes- 00:07:21.354 [2024-11-15 12:29:01.667493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f9a3000a cdw11:5c00a3a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.354 [2024-11-15 12:29:01.667520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.354 [2024-11-15 12:29:01.667592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:5c5c005c cdw11:fe005c5c 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.354 [2024-11-15 12:29:01.667607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.354 [2024-11-15 12:29:01.667662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:070100fe cdw11:5c00fefe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.354 [2024-11-15 12:29:01.667676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.354 #44 NEW cov: 12466 ft: 15351 corp: 31/759b lim: 35 exec/s: 44 rss: 75Mb L: 21/34 MS: 1 EraseBytes- 00:07:21.613 [2024-11-15 12:29:01.707403] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:21.613 [2024-11-15 12:29:01.707537] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:21.613 [2024-11-15 12:29:01.707743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a5c000a cdw11:5c005c5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.613 [2024-11-15 12:29:01.707768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.613 [2024-11-15 12:29:01.707828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.613 [2024-11-15 12:29:01.707848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.613 [2024-11-15 12:29:01.707904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:04000000 cdw11:5c000019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.613 [2024-11-15 12:29:01.707920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.613 #45 NEW cov: 12466 ft: 15415 corp: 32/785b lim: 35 exec/s: 45 rss: 75Mb L: 26/34 MS: 1 ChangeBit- 00:07:21.613 [2024-11-15 12:29:01.767623] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:21.613 [2024-11-15 12:29:01.767845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a5c000a cdw11:00005c00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.613 [2024-11-15 12:29:01.767870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.613 [2024-11-15 12:29:01.767929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.613 [2024-11-15 12:29:01.767942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.613 [2024-11-15 12:29:01.767997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:5c000019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.613 [2024-11-15 12:29:01.768013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.613 #46 NEW cov: 12466 ft: 15421 corp: 33/810b lim: 35 exec/s: 46 rss: 75Mb L: 25/34 MS: 1 ChangeByte- 00:07:21.613 [2024-11-15 
12:29:01.807599] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:21.614 [2024-11-15 12:29:01.807900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a5c000a cdw11:00005c00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.614 [2024-11-15 12:29:01.807926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.614 [2024-11-15 12:29:01.807987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.614 [2024-11-15 12:29:01.808003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.614 [2024-11-15 12:29:01.808059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:2b2b002b cdw11:2b002b2b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.614 [2024-11-15 12:29:01.808072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.614 #47 NEW cov: 12466 ft: 15454 corp: 34/836b lim: 35 exec/s: 47 rss: 75Mb L: 26/34 MS: 1 CrossOver- 00:07:21.614 [2024-11-15 12:29:01.867809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a5c000a cdw11:0a005c0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.614 [2024-11-15 12:29:01.867836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.614 #48 NEW cov: 12466 ft: 15462 corp: 35/848b lim: 35 exec/s: 24 rss: 75Mb L: 12/34 MS: 1 CrossOver- 00:07:21.614 #48 DONE cov: 12466 ft: 15462 corp: 35/848b lim: 35 exec/s: 24 rss: 75Mb 00:07:21.614 ###### Recommended dictionary. ###### 00:07:21.614 "\377\377\377\377\377\377\377G" # Uses: 2 00:07:21.614 "\000\000\000\000\000\000\000\000" # Uses: 1 00:07:21.614 "\015\000\000\000" # Uses: 0 00:07:21.614 ###### End of recommended dictionary. 
###### 00:07:21.614 Done 48 runs in 2 second(s) 00:07:21.873 12:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:07:21.873 12:29:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:21.873 12:29:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:21.873 12:29:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:21.873 12:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:07:21.873 12:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:21.873 12:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:21.873 12:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:21.873 12:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:07:21.873 12:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:21.873 12:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:21.873 12:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:07:21.873 12:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:07:21.873 12:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:21.873 12:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:07:21.873 12:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:21.873 12:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:21.873 12:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:21.873 12:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:07:21.873 [2024-11-15 12:29:02.056265] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:07:21.873 [2024-11-15 12:29:02.056352] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid671019 ] 00:07:22.132 [2024-11-15 12:29:02.382754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.132 [2024-11-15 12:29:02.440514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.389 [2024-11-15 12:29:02.500073] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.389 [2024-11-15 12:29:02.516302] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:07:22.389 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:22.389 INFO: Seed: 3787989196 00:07:22.389 INFO: Loaded 1 modules (387659 inline 8-bit counters): 387659 [0x2c4084c, 0x2c9f297), 00:07:22.389 INFO: Loaded 1 PC tables (387659 PCs): 387659 [0x2c9f298,0x3289748), 00:07:22.389 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:22.389 INFO: A corpus is not provided, starting from an empty corpus 00:07:22.390 #2 INITED exec/s: 0 rss: 66Mb 00:07:22.390 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:22.390 This may also happen if the target rejected all inputs we tried so far 00:07:22.648 NEW_FUNC[1/708]: 0x440c58 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:07:22.648 NEW_FUNC[2/708]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:22.648 #11 NEW cov: 12199 ft: 12187 corp: 2/6b lim: 20 exec/s: 0 rss: 73Mb L: 5/5 MS: 4 InsertByte-ChangeBinInt-ChangeByte-InsertRepeatedBytes- 00:07:22.648 [2024-11-15 12:29:02.902970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:22.648 [2024-11-15 12:29:02.903016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.648 NEW_FUNC[1/16]: 0x158fec8 in _nvmf_tcp_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3649 00:07:22.648 NEW_FUNC[2/16]: 0x185d628 in nvme_ctrlr_queue_async_event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:3300 00:07:22.648 #19 NEW cov: 12575 ft: 13635 corp: 3/21b lim: 20 exec/s: 0 rss: 74Mb L: 15/15 MS: 3 ChangeBinInt-ChangeBit-InsertRepeatedBytes- 00:07:22.648 [2024-11-15 12:29:02.953279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:22.648 [2024-11-15 12:29:02.953312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.648 [2024-11-15 12:29:02.953559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:22.648 [2024-11-15 12:29:02.953576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:07:22.906 #20 NEW cov: 12598 ft: 14663 corp: 4/41b lim: 20 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:07:22.906 [2024-11-15 12:29:03.013278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:22.906 [2024-11-15 12:29:03.013307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.906 #21 NEW cov: 12690 ft: 15017 corp: 5/60b lim: 20 exec/s: 0 rss: 74Mb L: 19/20 MS: 1 CMP- DE: "\377\377\377\014"- 00:07:22.906 [2024-11-15 12:29:03.063640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:22.906 [2024-11-15 12:29:03.063668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.906 [2024-11-15 
12:29:03.063916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:22.906 [2024-11-15 12:29:03.063933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:07:22.906 #22 NEW cov: 12690 ft: 15088 corp: 6/80b lim: 20 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 ChangeByte- 00:07:22.906 [2024-11-15 12:29:03.123752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:22.906 [2024-11-15 12:29:03.123779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.906 [2024-11-15 12:29:03.124028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:22.906 [2024-11-15 12:29:03.124045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:07:22.906 #23 NEW cov: 12690 ft: 15120 corp: 7/100b lim: 20 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 CopyPart- 00:07:22.906 [2024-11-15 12:29:03.163847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:22.906 [2024-11-15 12:29:03.163874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.906 [2024-11-15 12:29:03.164132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:22.906 [2024-11-15 12:29:03.164149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:07:22.906 #24 NEW cov: 12690 ft: 15181 corp: 8/120b lim: 20 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 ShuffleBytes- 00:07:22.906 [2024-11-15 12:29:03.224113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:22.906 [2024-11-15 12:29:03.224142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.906 [2024-11-15 12:29:03.224270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:22.906 [2024-11-15 12:29:03.224287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:07:23.164 #25 NEW cov: 12690 ft: 15240 corp: 9/140b lim: 20 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 InsertByte- 00:07:23.164 [2024-11-15 12:29:03.283734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.164 [2024-11-15 12:29:03.283762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.164 #26 NEW cov: 12691 ft: 15554 corp: 10/151b lim: 20 exec/s: 0 rss: 74Mb L: 11/20 MS: 1 EraseBytes- 00:07:23.164 [2024-11-15 12:29:03.324305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.164 [2024-11-15 12:29:03.324338] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.164 [2024-11-15 12:29:03.324588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.164 [2024-11-15 12:29:03.324605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:07:23.164 #27 NEW cov: 12691 ft: 15598 corp: 11/171b lim: 20 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 ShuffleBytes- 00:07:23.164 [2024-11-15 12:29:03.364427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.164 [2024-11-15 12:29:03.364456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.164 [2024-11-15 12:29:03.364659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.164 [2024-11-15 12:29:03.364676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:07:23.164 #28 NEW cov: 12691 ft: 15631 corp: 12/191b lim: 20 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 PersAutoDict- DE: "\377\377\377\014"- 00:07:23.164 [2024-11-15 12:29:03.424472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.164 [2024-11-15 12:29:03.424506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.164 NEW_FUNC[1/1]: 0x1c350e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:23.164 #34 NEW cov: 12714 ft: 15689 corp: 13/208b lim: 20 exec/s: 0 rss: 74Mb L: 17/20 MS: 1 EraseBytes- 00:07:23.422 #35 NEW cov: 12714 ft: 15826 corp: 14/221b lim: 20 exec/s: 0 rss: 74Mb L: 13/20 MS: 1 InsertRepeatedBytes- 00:07:23.422 #36 NEW cov: 12714 ft: 15885 corp: 15/235b lim: 20 exec/s: 36 rss: 74Mb L: 14/20 MS: 1 InsertByte- 00:07:23.422 [2024-11-15 12:29:03.605052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.422 [2024-11-15 12:29:03.605091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.422 [2024-11-15 12:29:03.605322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.422 [2024-11-15 12:29:03.605339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:07:23.422 #37 NEW cov: 12714 ft: 15908 corp: 16/255b lim: 20 exec/s: 37 rss: 74Mb L: 20/20 MS: 1 ChangeBinInt- 00:07:23.422 [2024-11-15 12:29:03.645229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.422 [2024-11-15 12:29:03.645257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.422 [2024-11-15 12:29:03.645295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:07:23.422 [2024-11-15 12:29:03.645308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:07:23.422 #43 NEW cov: 12714 ft: 15935 corp: 17/275b lim: 20 exec/s: 43 rss: 75Mb L: 20/20 MS: 1 CopyPart- 00:07:23.422 [2024-11-15 12:29:03.705328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.422 [2024-11-15 12:29:03.705355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.422 [2024-11-15 12:29:03.705620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.422 [2024-11-15 12:29:03.705637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:07:23.422 #44 NEW cov: 12714 ft: 15945 corp: 18/295b lim: 20 exec/s: 44 rss: 75Mb L: 20/20 MS: 1 ShuffleBytes- 00:07:23.422 [2024-11-15 12:29:03.745445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.422 [2024-11-15 12:29:03.745472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.422 [2024-11-15 12:29:03.745718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.422 [2024-11-15 12:29:03.745734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:07:23.679 #45 NEW cov: 12714 ft: 16026 corp: 19/315b lim: 20 exec/s: 45 rss: 75Mb L: 20/20 MS: 1 ChangeByte- 00:07:23.679 [2024-11-15 12:29:03.785536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.679 [2024-11-15 12:29:03.785561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.679 [2024-11-15 12:29:03.785782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.679 [2024-11-15 12:29:03.785798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:07:23.679 #46 NEW cov: 12714 ft: 16058 corp: 20/335b lim: 20 exec/s: 46 rss: 75Mb L: 20/20 MS: 1 CMP- DE: "\000\000\000\004"- 00:07:23.679 #47 NEW cov: 12714 ft: 16087 corp: 21/348b lim: 20 exec/s: 47 rss: 75Mb L: 13/20 MS: 1 CopyPart- 00:07:23.679 #48 NEW cov: 12714 ft: 16145 corp: 22/353b lim: 20 exec/s: 48 rss: 75Mb L: 5/20 MS: 1 ChangeBit- 00:07:23.679 #49 NEW cov: 12714 ft: 16183 corp: 23/371b lim: 20 exec/s: 49 rss: 75Mb L: 18/20 MS: 1 InsertRepeatedBytes- 00:07:23.679 [2024-11-15 12:29:03.965630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.679 [2024-11-15 12:29:03.965658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.679 #50 NEW cov: 12714 ft: 16221 corp: 24/382b lim: 20 exec/s: 50 rss: 75Mb L: 11/20 MS: 1 ShuffleBytes- 00:07:23.937 [2024-11-15 12:29:04.025957] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.937 [2024-11-15 12:29:04.025984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.937 #51 NEW cov: 12714 ft: 16229 corp: 25/397b lim: 20 exec/s: 51 rss: 75Mb L: 15/20 MS: 1 ChangeBinInt- 00:07:23.937 [2024-11-15 12:29:04.066359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.937 [2024-11-15 12:29:04.066384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.937 [2024-11-15 12:29:04.066567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.937 [2024-11-15 12:29:04.066584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:3 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:23.937 #52 NEW cov: 12714 ft: 16260 corp: 26/417b lim: 20 exec/s: 52 rss: 75Mb L: 20/20 MS: 1 ChangeBinInt- 00:07:23.937 [2024-11-15 12:29:04.125932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.937 [2024-11-15 12:29:04.125959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.937 #53 NEW cov: 12714 ft: 16308 corp: 27/423b lim: 20 exec/s: 53 rss: 75Mb L: 6/20 MS: 1 EraseBytes- 00:07:23.937 [2024-11-15 12:29:04.166696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.937 [2024-11-15 12:29:04.166723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.937 [2024-11-15 12:29:04.166979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.937 [2024-11-15 12:29:04.166997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:07:23.937 #54 NEW cov: 12714 ft: 16323 corp: 28/443b lim: 20 exec/s: 54 rss: 75Mb L: 20/20 MS: 1 ShuffleBytes- 00:07:23.937 [2024-11-15 12:29:04.206814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.937 [2024-11-15 12:29:04.206841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.937 [2024-11-15 12:29:04.207093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.937 [2024-11-15 12:29:04.207110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:07:23.937 #55 NEW cov: 12714 ft: 16336 corp: 29/463b lim: 20 exec/s: 55 rss: 75Mb L: 20/20 MS: 1 ShuffleBytes- 00:07:23.937 [2024-11-15 12:29:04.246953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.937 [2024-11-15 12:29:04.246980] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.937 [2024-11-15 12:29:04.247239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.937 [2024-11-15 12:29:04.247256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:07:23.937 #56 NEW cov: 12714 ft: 16372 corp: 30/483b lim: 20 exec/s: 56 rss: 75Mb L: 20/20 MS: 1 CopyPart- 00:07:24.195 [2024-11-15 12:29:04.286723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:24.195 [2024-11-15 12:29:04.286749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.195 #57 NEW cov: 12714 ft: 16425 corp: 31/497b lim: 20 exec/s: 57 rss: 75Mb L: 14/20 MS: 1 EraseBytes- 00:07:24.195 [2024-11-15 12:29:04.347241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:24.195 [2024-11-15 12:29:04.347272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.195 [2024-11-15 12:29:04.347508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:24.195 [2024-11-15 12:29:04.347525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:07:24.195 #58 NEW cov: 12714 ft: 16431 corp: 32/517b lim: 20 exec/s: 58 rss: 75Mb L: 20/20 MS: 1 CopyPart- 00:07:24.195 [2024-11-15 12:29:04.387238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:24.195 [2024-11-15 12:29:04.387267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.195 #59 NEW cov: 12714 ft: 16438 corp: 33/533b lim: 20 exec/s: 59 rss: 75Mb L: 16/20 MS: 1 InsertByte- 00:07:24.195 [2024-11-15 12:29:04.427272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:24.195 [2024-11-15 12:29:04.427301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.195 #60 NEW cov: 12714 ft: 16465 corp: 34/552b lim: 20 exec/s: 60 rss: 75Mb L: 19/20 MS: 1 CMP- DE: "\377\377\377\377\377\377\377G"- 00:07:24.195 #61 NEW cov: 12714 ft: 16528 corp: 35/566b lim: 20 exec/s: 61 rss: 75Mb L: 14/20 MS: 1 ChangeBinInt- 00:07:24.195 [2024-11-15 12:29:04.527778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:24.195 [2024-11-15 12:29:04.527807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.195 [2024-11-15 12:29:04.528069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:24.195 [2024-11-15 12:29:04.528086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:3 
cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:07:24.454 #62 NEW cov: 12714 ft: 16539 corp: 36/586b lim: 20 exec/s: 31 rss: 75Mb L: 20/20 MS: 1 ChangeBit- 00:07:24.454 #62 DONE cov: 12714 ft: 16539 corp: 36/586b lim: 20 exec/s: 31 rss: 75Mb 00:07:24.454 ###### Recommended dictionary. ###### 00:07:24.454 "\377\377\377\014" # Uses: 1 00:07:24.454 "\000\000\000\004" # Uses: 0 00:07:24.454 "\377\377\377\377\377\377\377G" # Uses: 0 00:07:24.454 ###### End of recommended dictionary. ###### 00:07:24.454 Done 62 runs in 2 second(s) 00:07:24.454 12:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:07:24.454 12:29:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:24.454 12:29:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:24.454 12:29:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:24.454 12:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:07:24.454 12:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:24.454 12:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:24.454 12:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:24.454 12:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:07:24.454 12:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:24.454 12:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:24.454 12:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:07:24.454 12:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:07:24.454 12:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:24.454 12:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:07:24.454 12:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:24.454 12:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:24.454 12:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:24.454 12:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:07:24.454 [2024-11-15 12:29:04.715596] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:07:24.454 [2024-11-15 12:29:04.715667] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid671368 ] 00:07:24.712 [2024-11-15 12:29:05.038424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.970 [2024-11-15 12:29:05.099103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.970 [2024-11-15 12:29:05.158398] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.971 [2024-11-15 12:29:05.174640] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:07:24.971 INFO: Running with entropic power schedule (0xFF, 100). 00:07:24.971 INFO: Seed: 2150026429 00:07:24.971 INFO: Loaded 1 modules (387659 inline 8-bit counters): 387659 [0x2c4084c, 0x2c9f297), 00:07:24.971 INFO: Loaded 1 PC tables (387659 PCs): 387659 [0x2c9f298,0x3289748), 00:07:24.971 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:24.971 INFO: A corpus is not provided, starting from an empty corpus 00:07:24.971 #2 INITED exec/s: 0 rss: 66Mb 00:07:24.971 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:24.971 This may also happen if the target rejected all inputs we tried so far 00:07:24.971 [2024-11-15 12:29:05.224142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0aff0a10 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.971 [2024-11-15 12:29:05.224174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.971 [2024-11-15 12:29:05.224243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.971 [2024-11-15 12:29:05.224258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.971 [2024-11-15 12:29:05.224310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.971 [2024-11-15 12:29:05.224334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.229 NEW_FUNC[1/716]: 0x441d58 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:07:25.229 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:25.229 #10 NEW cov: 12249 ft: 12246 corp: 2/27b lim: 35 exec/s: 0 rss: 73Mb L: 26/26 MS: 3 InsertByte-CrossOver-InsertRepeatedBytes- 00:07:25.229 [2024-11-15 12:29:05.554780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:418a0a00 cdw11:e4320000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.229 [2024-11-15 12:29:05.554818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.487 #11 NEW cov: 12362 ft: 13600 corp: 3/36b lim: 35 exec/s: 0 rss: 74Mb L: 9/26 MS: 1 CMP- DE: "\000A\212\34422\0174"- 
00:07:25.487 [2024-11-15 12:29:05.594782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:418a0a00 cdw11:e4320000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.487 [2024-11-15 12:29:05.594809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.487 #12 NEW cov: 12368 ft: 13905 corp: 4/45b lim: 35 exec/s: 0 rss: 74Mb L: 9/26 MS: 1 ChangeASCIIInt- 00:07:25.488 [2024-11-15 12:29:05.654930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:418a0a00 cdw11:e4320000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.488 [2024-11-15 12:29:05.654956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.488 #13 NEW cov: 12453 ft: 14165 corp: 5/54b lim: 35 exec/s: 0 rss: 74Mb L: 9/26 MS: 1 ChangeByte- 00:07:25.488 [2024-11-15 12:29:05.695536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0aff0a10 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.488 [2024-11-15 12:29:05.695562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.488 [2024-11-15 12:29:05.695617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.488 [2024-11-15 12:29:05.695631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.488 [2024-11-15 12:29:05.695684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:0e00ffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.488 [2024-11-15 12:29:05.695697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.488 [2024-11-15 12:29:05.695750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00ff0000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.488 [2024-11-15 12:29:05.695764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.488 #14 NEW cov: 12453 ft: 14629 corp: 6/88b lim: 35 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 CMP- DE: "\016\000\000\000\000\000\000\000"- 00:07:25.488 [2024-11-15 12:29:05.755214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:418a0a00 cdw11:e4000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.488 [2024-11-15 12:29:05.755242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.488 #15 NEW cov: 12453 ft: 14728 corp: 7/97b lim: 35 exec/s: 0 rss: 74Mb L: 9/34 MS: 1 CopyPart- 00:07:25.488 [2024-11-15 12:29:05.795952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0aff0a10 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.488 [2024-11-15 12:29:05.795977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.488 [2024-11-15 12:29:05.796031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 
cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.488 [2024-11-15 12:29:05.796045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.488 [2024-11-15 12:29:05.796100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:0e00ffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.488 [2024-11-15 12:29:05.796113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.488 [2024-11-15 12:29:05.796167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000a00 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.488 [2024-11-15 12:29:05.796184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.488 [2024-11-15 12:29:05.796237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.488 [2024-11-15 12:29:05.796251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.746 #16 NEW cov: 12453 ft: 14893 corp: 8/132b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 CrossOver- 00:07:25.746 [2024-11-15 12:29:05.855534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:418a0a00 cdw11:e4000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.746 [2024-11-15 12:29:05.855560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.746 #17 NEW cov: 12453 ft: 15013 corp: 9/141b lim: 35 exec/s: 0 rss: 74Mb L: 9/35 MS: 1 CMP- DE: "\000\000"- 00:07:25.746 [2024-11-15 12:29:05.916288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.746 [2024-11-15 12:29:05.916321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.746 [2024-11-15 12:29:05.916379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.746 [2024-11-15 12:29:05.916395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.746 [2024-11-15 12:29:05.916452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.746 [2024-11-15 12:29:05.916466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.746 [2024-11-15 12:29:05.916523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.746 [2024-11-15 12:29:05.916538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.746 #20 NEW cov: 12453 ft: 15063 corp: 10/174b lim: 35 exec/s: 0 rss: 74Mb L: 33/35 MS: 3 
ChangeBit-CopyPart-InsertRepeatedBytes- 00:07:25.746 [2024-11-15 12:29:05.955879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:418a0a00 cdw11:e48a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.746 [2024-11-15 12:29:05.955907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.746 #21 NEW cov: 12453 ft: 15118 corp: 11/181b lim: 35 exec/s: 0 rss: 74Mb L: 7/35 MS: 1 EraseBytes- 00:07:25.746 [2024-11-15 12:29:05.995895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:418a0a00 cdw11:e48a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.746 [2024-11-15 12:29:05.995922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.746 #22 NEW cov: 12453 ft: 15133 corp: 12/188b lim: 35 exec/s: 0 rss: 74Mb L: 7/35 MS: 1 CopyPart- 00:07:25.746 [2024-11-15 12:29:06.056396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0aff0a10 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.746 [2024-11-15 12:29:06.056421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.746 [2024-11-15 12:29:06.056503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.746 [2024-11-15 12:29:06.056517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.746 [2024-11-15 12:29:06.056574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.746 [2024-11-15 12:29:06.056587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.746 #23 NEW cov: 12453 ft: 15159 corp: 13/215b lim: 35 exec/s: 0 rss: 74Mb L: 27/35 MS: 1 InsertByte- 00:07:26.004 [2024-11-15 12:29:06.096141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:418a0a00 cdw11:e4000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.004 [2024-11-15 12:29:06.096167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.004 NEW_FUNC[1/1]: 0x1c350e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:26.004 #24 NEW cov: 12476 ft: 15211 corp: 14/224b lim: 35 exec/s: 0 rss: 74Mb L: 9/35 MS: 1 CrossOver- 00:07:26.004 [2024-11-15 12:29:06.136259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:418a0a00 cdw11:e4000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.004 [2024-11-15 12:29:06.136284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.004 #25 NEW cov: 12476 ft: 15245 corp: 15/234b lim: 35 exec/s: 0 rss: 74Mb L: 10/35 MS: 1 InsertByte- 00:07:26.004 [2024-11-15 12:29:06.196428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:09000a00 cdw11:e4000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.004 [2024-11-15 12:29:06.196453] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.004 #26 NEW cov: 12476 ft: 15273 corp: 16/243b lim: 35 exec/s: 26 rss: 74Mb L: 9/35 MS: 1 ChangeBinInt- 00:07:26.004 [2024-11-15 12:29:06.236535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:418a0a00 cdw11:e4000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.004 [2024-11-15 12:29:06.236559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.004 #27 NEW cov: 12476 ft: 15306 corp: 17/252b lim: 35 exec/s: 27 rss: 74Mb L: 9/35 MS: 1 CopyPart- 00:07:26.004 [2024-11-15 12:29:06.277146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:45410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.004 [2024-11-15 12:29:06.277171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.004 [2024-11-15 12:29:06.277226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.004 [2024-11-15 12:29:06.277240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.004 [2024-11-15 12:29:06.277294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.004 [2024-11-15 12:29:06.277308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.004 [2024-11-15 12:29:06.277382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.004 [2024-11-15 12:29:06.277407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.004 #28 NEW cov: 12476 ft: 15325 corp: 18/285b lim: 35 exec/s: 28 rss: 74Mb L: 33/35 MS: 1 ChangeBinInt- 00:07:26.004 [2024-11-15 12:29:06.336790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:418a0a00 cdw11:e4320000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.004 [2024-11-15 12:29:06.336818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.262 #29 NEW cov: 12476 ft: 15358 corp: 19/294b lim: 35 exec/s: 29 rss: 74Mb L: 9/35 MS: 1 ChangeASCIIInt- 00:07:26.262 [2024-11-15 12:29:06.397117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0a000a00 cdw11:09000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.262 [2024-11-15 12:29:06.397142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.262 [2024-11-15 12:29:06.397215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00410041 cdw11:8ae40000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.262 [2024-11-15 12:29:06.397229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.262 #30 NEW 
cov: 12476 ft: 15570 corp: 20/312b lim: 35 exec/s: 30 rss: 74Mb L: 18/35 MS: 1 CrossOver- 00:07:26.262 [2024-11-15 12:29:06.457626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:418a0a00 cdw11:e4000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.262 [2024-11-15 12:29:06.457650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.262 [2024-11-15 12:29:06.457721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00008a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.262 [2024-11-15 12:29:06.457735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.262 [2024-11-15 12:29:06.457791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.262 [2024-11-15 12:29:06.457804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.262 [2024-11-15 12:29:06.457859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.262 [2024-11-15 12:29:06.457873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.262 #31 NEW cov: 12476 ft: 15577 corp: 21/346b lim: 35 exec/s: 31 rss: 74Mb L: 34/35 MS: 1 InsertRepeatedBytes- 00:07:26.262 [2024-11-15 12:29:06.497775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.262 [2024-11-15 12:29:06.497800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.262 [2024-11-15 12:29:06.497872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.262 [2024-11-15 12:29:06.497886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.262 [2024-11-15 12:29:06.497942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.262 [2024-11-15 12:29:06.497956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.262 [2024-11-15 12:29:06.498009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:41414141 cdw11:00410001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.262 [2024-11-15 12:29:06.498023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.262 #32 NEW cov: 12476 ft: 15609 corp: 22/379b lim: 35 exec/s: 32 rss: 74Mb L: 33/35 MS: 1 PersAutoDict- DE: "\000A\212\34422\0174"- 00:07:26.262 [2024-11-15 12:29:06.537377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:418a0a00 cdw11:e4320000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.262 [2024-11-15 
12:29:06.537407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.262 #33 NEW cov: 12476 ft: 15698 corp: 23/389b lim: 35 exec/s: 33 rss: 74Mb L: 10/35 MS: 1 InsertByte- 00:07:26.262 [2024-11-15 12:29:06.577675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:418a0a00 cdw11:e40a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.262 [2024-11-15 12:29:06.577702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.262 [2024-11-15 12:29:06.577774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e400418a cdw11:418a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.262 [2024-11-15 12:29:06.577789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.520 #34 NEW cov: 12476 ft: 15754 corp: 24/406b lim: 35 exec/s: 34 rss: 75Mb L: 17/35 MS: 1 CopyPart- 00:07:26.520 [2024-11-15 12:29:06.638170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.521 [2024-11-15 12:29:06.638196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.521 [2024-11-15 12:29:06.638251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.521 [2024-11-15 12:29:06.638266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.521 [2024-11-15 12:29:06.638320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.521 [2024-11-15 12:29:06.638349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.521 [2024-11-15 12:29:06.638404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.521 [2024-11-15 12:29:06.638417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.521 #35 NEW cov: 12476 ft: 15789 corp: 25/439b lim: 35 exec/s: 35 rss: 75Mb L: 33/35 MS: 1 ShuffleBytes- 00:07:26.521 [2024-11-15 12:29:06.677772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:418a0a00 cdw11:e4310000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.521 [2024-11-15 12:29:06.677797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.521 #36 NEW cov: 12476 ft: 15816 corp: 26/448b lim: 35 exec/s: 36 rss: 75Mb L: 9/35 MS: 1 ChangeASCIIInt- 00:07:26.521 [2024-11-15 12:29:06.717895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:41410a00 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.521 [2024-11-15 12:29:06.717919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:26.521 #37 NEW cov: 12476 ft: 15877 corp: 27/457b lim: 35 exec/s: 37 rss: 75Mb L: 9/35 MS: 1 CrossOver- 00:07:26.521 [2024-11-15 12:29:06.758160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8ae40041 cdw11:32320000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.521 [2024-11-15 12:29:06.758185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.521 [2024-11-15 12:29:06.758257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0041340a cdw11:8a4e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.521 [2024-11-15 12:29:06.758274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.521 #40 NEW cov: 12476 ft: 15899 corp: 28/471b lim: 35 exec/s: 40 rss: 75Mb L: 14/35 MS: 3 EraseBytes-ChangeASCIIInt-PersAutoDict- DE: "\000A\212\34422\0174"- 00:07:26.521 [2024-11-15 12:29:06.818504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0aff0a10 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.521 [2024-11-15 12:29:06.818528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.521 [2024-11-15 12:29:06.818600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffbfffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.521 [2024-11-15 12:29:06.818614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.521 [2024-11-15 12:29:06.818671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.521 [2024-11-15 12:29:06.818685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.521 #41 NEW cov: 12476 ft: 15910 corp: 29/497b lim: 35 exec/s: 41 rss: 75Mb L: 26/35 MS: 1 ChangeBit- 00:07:26.521 [2024-11-15 12:29:06.858748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:41410a00 cdw11:416d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.521 [2024-11-15 12:29:06.858772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.521 [2024-11-15 12:29:06.858844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6d6d6d6d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.521 [2024-11-15 12:29:06.858858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.521 [2024-11-15 12:29:06.858913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:6d6d6d6d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.521 [2024-11-15 12:29:06.858926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.521 [2024-11-15 12:29:06.858982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:6d416d6d cdw11:418a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:26.521 [2024-11-15 12:29:06.858996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.780 #42 NEW cov: 12476 ft: 15942 corp: 30/525b lim: 35 exec/s: 42 rss: 75Mb L: 28/35 MS: 1 InsertRepeatedBytes- 00:07:26.780 [2024-11-15 12:29:06.918996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.780 [2024-11-15 12:29:06.919022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.780 [2024-11-15 12:29:06.919078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.780 [2024-11-15 12:29:06.919091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.780 [2024-11-15 12:29:06.919146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:41414141 cdw11:41320002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.780 [2024-11-15 12:29:06.919159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.780 [2024-11-15 12:29:06.919212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.780 [2024-11-15 12:29:06.919229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.780 #43 NEW cov: 12476 ft: 15953 corp: 31/558b lim: 35 exec/s: 43 rss: 75Mb L: 33/35 MS: 1 ChangeByte- 00:07:26.780 [2024-11-15 12:29:06.958581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:418a0a00 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.780 [2024-11-15 12:29:06.958605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.780 #44 NEW cov: 12476 ft: 15965 corp: 32/567b lim: 35 exec/s: 44 rss: 75Mb L: 9/35 MS: 1 CrossOver- 00:07:26.780 [2024-11-15 12:29:06.999087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8ae40041 cdw11:32320000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.780 [2024-11-15 12:29:06.999111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.780 [2024-11-15 12:29:06.999182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00003400 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.780 [2024-11-15 12:29:06.999196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.780 [2024-11-15 12:29:06.999252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.780 [2024-11-15 12:29:06.999265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.780 #45 NEW cov: 12476 ft: 15994 corp: 33/593b lim: 35 exec/s: 45 rss: 75Mb L: 
26/35 MS: 1 InsertRepeatedBytes- 00:07:26.780 [2024-11-15 12:29:07.059379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:45410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.780 [2024-11-15 12:29:07.059406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.780 [2024-11-15 12:29:07.059480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.780 [2024-11-15 12:29:07.059495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.780 [2024-11-15 12:29:07.059551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.780 [2024-11-15 12:29:07.059564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.780 [2024-11-15 12:29:07.059620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.780 [2024-11-15 12:29:07.059634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.780 #46 NEW cov: 12476 ft: 16031 corp: 34/626b lim: 35 exec/s: 46 rss: 75Mb L: 33/35 MS: 1 ShuffleBytes- 00:07:26.780 [2024-11-15 12:29:07.119024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:09000a00 cdw11:e4000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.780 [2024-11-15 12:29:07.119051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.039 #52 NEW cov: 12476 ft: 16054 corp: 35/635b lim: 35 exec/s: 52 rss: 75Mb L: 9/35 MS: 1 ChangeBit- 00:07:27.039 [2024-11-15 12:29:07.179232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:09000a00 cdw11:e4000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.039 [2024-11-15 12:29:07.179258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.039 #53 NEW cov: 12476 ft: 16097 corp: 36/644b lim: 35 exec/s: 26 rss: 75Mb L: 9/35 MS: 1 ChangeBit- 00:07:27.039 #53 DONE cov: 12476 ft: 16097 corp: 36/644b lim: 35 exec/s: 26 rss: 75Mb 00:07:27.039 ###### Recommended dictionary. ###### 00:07:27.039 "\000A\212\34422\0174" # Uses: 2 00:07:27.039 "\016\000\000\000\000\000\000\000" # Uses: 0 00:07:27.039 "\000\000" # Uses: 0 00:07:27.039 ###### End of recommended dictionary. 
###### 00:07:27.039 Done 53 runs in 2 second(s) 00:07:27.039 12:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:07:27.039 12:29:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:27.039 12:29:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:27.039 12:29:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:27.039 12:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:07:27.039 12:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:27.039 12:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:27.039 12:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:27.039 12:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:07:27.039 12:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:27.039 12:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:27.039 12:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:07:27.039 12:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:07:27.039 12:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:27.039 12:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:07:27.039 12:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:27.039 12:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:27.039 12:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:27.039 12:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:07:27.039 [2024-11-15 12:29:07.378239] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:07:27.039 [2024-11-15 12:29:07.378320] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid671721 ] 00:07:27.607 [2024-11-15 12:29:07.708142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.607 [2024-11-15 12:29:07.761188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.607 [2024-11-15 12:29:07.820456] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.607 [2024-11-15 12:29:07.836679] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:07:27.607 INFO: Running with entropic power schedule (0xFF, 100). 
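The run.sh trace above shows the per-run setup for fuzzer 5: the run index is zero-padded, the NVMe/TCP service ID becomes 44<index> (4405 here), a dedicated corpus directory and JSON config are prepared, leak suppressions are registered, and llvm_nvme_fuzz is started against that listener. The lines below are only a minimal bash sketch of that pattern, not the script itself: SPDK_ROOT and OUT_DIR are placeholders, the redirection targets for sed and the suppression echoes are assumptions (xtrace does not record redirections), and the fuzzer flags are copied from the command line recorded in this log.

#!/usr/bin/env bash
# Sketch of the per-run setup visible in the nvmf/run.sh trace above (illustrative only).
SPDK_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # placeholder root
OUT_DIR=$SPDK_ROOT/../output/llvm
SUPPRESS_FILE=/var/tmp/suppress_nvmf_fuzz

i=$1                                    # fuzzer index, e.g. 5
port="44$(printf %02d "$i")"            # printf %02d 5 -> 05, so port 4405, matching the trace
corpus_dir=$SPDK_ROOT/../corpus/llvm_nvmf_$i
nvmf_cfg=/tmp/fuzz_json_$i.conf
trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
export LSAN_OPTIONS="report_objects=1:suppressions=$SUPPRESS_FILE:print_suppressions=0"

mkdir -p "$corpus_dir"
# Rewrite the default listener port in the shared JSON config, as the logged sed call does;
# writing the result into $nvmf_cfg is an assumption, the trace only shows the sed command.
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    "$SPDK_ROOT/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

# Register leak suppressions; the target file is assumed from the suppress_file variable above.
echo "leak:spdk_nvmf_qpair_disconnect" >> "$SUPPRESS_FILE"
echo "leak:nvmf_ctrlr_create" >> "$SUPPRESS_FILE"

# Launch the fuzzer against that listener; flags are as recorded in the log.
"$SPDK_ROOT/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
    -P "$OUT_DIR/" -F "$trid" -c "$nvmf_cfg" -t 1 -D "$corpus_dir" -Z "$i"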
00:07:27.607 INFO: Seed: 519053958 00:07:27.607 INFO: Loaded 1 modules (387659 inline 8-bit counters): 387659 [0x2c4084c, 0x2c9f297), 00:07:27.607 INFO: Loaded 1 PC tables (387659 PCs): 387659 [0x2c9f298,0x3289748), 00:07:27.607 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:27.607 INFO: A corpus is not provided, starting from an empty corpus 00:07:27.607 #2 INITED exec/s: 0 rss: 67Mb 00:07:27.607 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:27.607 This may also happen if the target rejected all inputs we tried so far 00:07:27.607 [2024-11-15 12:29:07.892167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:63630a63 cdw11:63630003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.607 [2024-11-15 12:29:07.892198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.864 NEW_FUNC[1/716]: 0x443ef8 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:07:27.864 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:27.864 #11 NEW cov: 12260 ft: 12252 corp: 2/18b lim: 45 exec/s: 0 rss: 74Mb L: 17/17 MS: 4 CrossOver-EraseBytes-CrossOver-InsertRepeatedBytes- 00:07:28.122 [2024-11-15 12:29:08.223233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.122 [2024-11-15 12:29:08.223274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.122 [2024-11-15 12:29:08.223330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.123 [2024-11-15 12:29:08.223361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.123 [2024-11-15 12:29:08.223415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.123 [2024-11-15 12:29:08.223428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.123 #13 NEW cov: 12373 ft: 13556 corp: 3/51b lim: 45 exec/s: 0 rss: 74Mb L: 33/33 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:07:28.123 [2024-11-15 12:29:08.263254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.123 [2024-11-15 12:29:08.263281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.123 [2024-11-15 12:29:08.263352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.123 [2024-11-15 12:29:08.263366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.123 [2024-11-15 12:29:08.263419] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.123 [2024-11-15 12:29:08.263433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.123 #14 NEW cov: 12379 ft: 13835 corp: 4/86b lim: 45 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 CMP- DE: "\001\000"- 00:07:28.123 [2024-11-15 12:29:08.323417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.123 [2024-11-15 12:29:08.323444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.123 [2024-11-15 12:29:08.323496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.123 [2024-11-15 12:29:08.323509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.123 [2024-11-15 12:29:08.323562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.123 [2024-11-15 12:29:08.323578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.123 #20 NEW cov: 12464 ft: 14092 corp: 5/121b lim: 45 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:07:28.123 [2024-11-15 12:29:08.383611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.123 [2024-11-15 12:29:08.383637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.123 [2024-11-15 12:29:08.383691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.123 [2024-11-15 12:29:08.383704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.123 [2024-11-15 12:29:08.383755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.123 [2024-11-15 12:29:08.383769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.123 #21 NEW cov: 12464 ft: 14176 corp: 6/156b lim: 45 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 CrossOver- 00:07:28.123 [2024-11-15 12:29:08.423693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.123 [2024-11-15 12:29:08.423719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.123 [2024-11-15 12:29:08.423773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.123 [2024-11-15 12:29:08.423787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.123 [2024-11-15 12:29:08.423838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.123 [2024-11-15 12:29:08.423852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.381 #22 NEW cov: 12464 ft: 14200 corp: 7/191b lim: 45 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:07:28.381 [2024-11-15 12:29:08.483616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:63630a63 cdw11:63630003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.381 [2024-11-15 12:29:08.483642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.381 #23 NEW cov: 12464 ft: 14261 corp: 8/208b lim: 45 exec/s: 0 rss: 74Mb L: 17/35 MS: 1 ChangeByte- 00:07:28.381 [2024-11-15 12:29:08.544056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.381 [2024-11-15 12:29:08.544083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.381 [2024-11-15 12:29:08.544137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.381 [2024-11-15 12:29:08.544150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.381 [2024-11-15 12:29:08.544203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.381 [2024-11-15 12:29:08.544232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.381 #24 NEW cov: 12464 ft: 14327 corp: 9/243b lim: 45 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 ChangeByte- 00:07:28.381 [2024-11-15 12:29:08.583845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:63630a63 cdw11:630a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.381 [2024-11-15 12:29:08.583875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.381 #25 NEW cov: 12464 ft: 14433 corp: 10/253b lim: 45 exec/s: 0 rss: 74Mb L: 10/35 MS: 1 CrossOver- 00:07:28.381 [2024-11-15 12:29:08.644221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:63630a63 cdw11:63630003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.381 [2024-11-15 12:29:08.644247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.381 [2024-11-15 12:29:08.644301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:63636363 cdw11:63600003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.381 [2024-11-15 12:29:08.644321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.381 #26 NEW cov: 12464 ft: 14740 corp: 11/271b lim: 45 exec/s: 0 rss: 74Mb L: 18/35 MS: 1 
InsertByte- 00:07:28.381 [2024-11-15 12:29:08.684607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.381 [2024-11-15 12:29:08.684632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.381 [2024-11-15 12:29:08.684688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.381 [2024-11-15 12:29:08.684702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.381 [2024-11-15 12:29:08.684757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e2e20000 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.381 [2024-11-15 12:29:08.684770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.381 [2024-11-15 12:29:08.684823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.381 [2024-11-15 12:29:08.684836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.640 #27 NEW cov: 12464 ft: 15070 corp: 12/313b lim: 45 exec/s: 0 rss: 75Mb L: 42/42 MS: 1 InsertRepeatedBytes- 00:07:28.640 [2024-11-15 12:29:08.744707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.640 [2024-11-15 12:29:08.744735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.640 [2024-11-15 12:29:08.744790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.640 [2024-11-15 12:29:08.744804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.640 [2024-11-15 12:29:08.744857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.640 [2024-11-15 12:29:08.744871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.640 NEW_FUNC[1/1]: 0x1c350e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:28.640 #28 NEW cov: 12487 ft: 15111 corp: 13/348b lim: 45 exec/s: 0 rss: 75Mb L: 35/42 MS: 1 ShuffleBytes- 00:07:28.640 [2024-11-15 12:29:08.804944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.640 [2024-11-15 12:29:08.804972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.641 [2024-11-15 12:29:08.805028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0be20000 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.641 
[2024-11-15 12:29:08.805042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.641 [2024-11-15 12:29:08.805094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.641 [2024-11-15 12:29:08.805107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.641 [2024-11-15 12:29:08.805159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.641 [2024-11-15 12:29:08.805172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.641 #29 NEW cov: 12487 ft: 15127 corp: 14/391b lim: 45 exec/s: 0 rss: 75Mb L: 43/43 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\013"- 00:07:28.641 [2024-11-15 12:29:08.845056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.641 [2024-11-15 12:29:08.845084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.641 [2024-11-15 12:29:08.845140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.641 [2024-11-15 12:29:08.845154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.641 [2024-11-15 12:29:08.845207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e2e20000 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.641 [2024-11-15 12:29:08.845220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.641 [2024-11-15 12:29:08.845275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.641 [2024-11-15 12:29:08.845288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.641 #30 NEW cov: 12487 ft: 15167 corp: 15/433b lim: 45 exec/s: 30 rss: 75Mb L: 42/43 MS: 1 ChangeBit- 00:07:28.641 [2024-11-15 12:29:08.905045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.641 [2024-11-15 12:29:08.905071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.641 [2024-11-15 12:29:08.905125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.641 [2024-11-15 12:29:08.905139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.641 [2024-11-15 12:29:08.905192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:28.641 [2024-11-15 12:29:08.905205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.641 #31 NEW cov: 12487 ft: 15180 corp: 16/468b lim: 45 exec/s: 31 rss: 75Mb L: 35/43 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\013"- 00:07:28.641 [2024-11-15 12:29:08.945006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:63e20a63 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.641 [2024-11-15 12:29:08.945034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.641 [2024-11-15 12:29:08.945088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:63636363 cdw11:63630003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.641 [2024-11-15 12:29:08.945102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.641 #32 NEW cov: 12487 ft: 15210 corp: 17/492b lim: 45 exec/s: 32 rss: 75Mb L: 24/43 MS: 1 CrossOver- 00:07:28.900 [2024-11-15 12:29:08.985319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.900 [2024-11-15 12:29:08.985361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.900 [2024-11-15 12:29:08.985416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.900 [2024-11-15 12:29:08.985429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.900 [2024-11-15 12:29:08.985482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e2e2e2f2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.900 [2024-11-15 12:29:08.985495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.900 #33 NEW cov: 12487 ft: 15246 corp: 18/525b lim: 45 exec/s: 33 rss: 75Mb L: 33/43 MS: 1 ChangeBit- 00:07:28.900 [2024-11-15 12:29:09.025528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.900 [2024-11-15 12:29:09.025553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.900 [2024-11-15 12:29:09.025622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.900 [2024-11-15 12:29:09.025636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.900 [2024-11-15 12:29:09.025687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e2e20000 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.900 [2024-11-15 12:29:09.025701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.900 [2024-11-15 
12:29:09.025758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:0000e201 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.900 [2024-11-15 12:29:09.025771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.900 #34 NEW cov: 12487 ft: 15277 corp: 19/567b lim: 45 exec/s: 34 rss: 75Mb L: 42/43 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\013"- 00:07:28.900 [2024-11-15 12:29:09.085562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.900 [2024-11-15 12:29:09.085587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.900 [2024-11-15 12:29:09.085658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.900 [2024-11-15 12:29:09.085673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.900 [2024-11-15 12:29:09.085727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.900 [2024-11-15 12:29:09.085744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.900 #35 NEW cov: 12487 ft: 15347 corp: 20/602b lim: 45 exec/s: 35 rss: 75Mb L: 35/43 MS: 1 ChangeBinInt- 00:07:28.900 [2024-11-15 12:29:09.125457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.900 [2024-11-15 12:29:09.125482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.900 [2024-11-15 12:29:09.125554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.900 [2024-11-15 12:29:09.125569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.900 [2024-11-15 12:29:09.125622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e2e20000 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.900 [2024-11-15 12:29:09.125635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.900 #36 NEW cov: 12487 ft: 15362 corp: 21/631b lim: 45 exec/s: 36 rss: 75Mb L: 29/43 MS: 1 EraseBytes- 00:07:28.900 [2024-11-15 12:29:09.165909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.900 [2024-11-15 12:29:09.165934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.900 [2024-11-15 12:29:09.166006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e2e209e2 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.900 [2024-11-15 
12:29:09.166020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.900 [2024-11-15 12:29:09.166074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e2e20000 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.900 [2024-11-15 12:29:09.166088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.900 [2024-11-15 12:29:09.166143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:0000e201 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.900 [2024-11-15 12:29:09.166156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.900 #37 NEW cov: 12487 ft: 15387 corp: 22/673b lim: 45 exec/s: 37 rss: 75Mb L: 42/43 MS: 1 ChangeByte- 00:07:28.900 [2024-11-15 12:29:09.226093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.900 [2024-11-15 12:29:09.226119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.900 [2024-11-15 12:29:09.226190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e2e209e2 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.900 [2024-11-15 12:29:09.226204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.900 [2024-11-15 12:29:09.226257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e2e20000 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.900 [2024-11-15 12:29:09.226270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.900 [2024-11-15 12:29:09.226328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:0000e201 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.900 [2024-11-15 12:29:09.226345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.159 #38 NEW cov: 12487 ft: 15416 corp: 23/715b lim: 45 exec/s: 38 rss: 75Mb L: 42/43 MS: 1 ShuffleBytes- 00:07:29.159 [2024-11-15 12:29:09.286093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.159 [2024-11-15 12:29:09.286118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.159 [2024-11-15 12:29:09.286188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.159 [2024-11-15 12:29:09.286202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.159 [2024-11-15 12:29:09.286257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.159 
[2024-11-15 12:29:09.286271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.159 #39 NEW cov: 12487 ft: 15449 corp: 24/750b lim: 45 exec/s: 39 rss: 75Mb L: 35/43 MS: 1 ChangeByte- 00:07:29.159 [2024-11-15 12:29:09.326319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.159 [2024-11-15 12:29:09.326344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.159 [2024-11-15 12:29:09.326425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0be20000 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.159 [2024-11-15 12:29:09.326440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.159 [2024-11-15 12:29:09.326493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.159 [2024-11-15 12:29:09.326506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.159 [2024-11-15 12:29:09.326556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.159 [2024-11-15 12:29:09.326570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.159 #40 NEW cov: 12487 ft: 15475 corp: 25/793b lim: 45 exec/s: 40 rss: 75Mb L: 43/43 MS: 1 ChangeByte- 00:07:29.159 [2024-11-15 12:29:09.386528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e200e2e2 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.159 [2024-11-15 12:29:09.386553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.159 [2024-11-15 12:29:09.386609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0be20000 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.159 [2024-11-15 12:29:09.386623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.159 [2024-11-15 12:29:09.386675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.159 [2024-11-15 12:29:09.386688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.159 [2024-11-15 12:29:09.386740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.159 [2024-11-15 12:29:09.386756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.159 #41 NEW cov: 12487 ft: 15488 corp: 26/836b lim: 45 exec/s: 41 rss: 75Mb L: 43/43 MS: 1 ShuffleBytes- 00:07:29.159 [2024-11-15 12:29:09.426588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.159 [2024-11-15 12:29:09.426613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.159 [2024-11-15 12:29:09.426681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e2e209e2 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.159 [2024-11-15 12:29:09.426695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.159 [2024-11-15 12:29:09.426749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e2e20000 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.159 [2024-11-15 12:29:09.426762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.159 [2024-11-15 12:29:09.426816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:0000e201 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.159 [2024-11-15 12:29:09.426829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.159 #42 NEW cov: 12487 ft: 15517 corp: 27/878b lim: 45 exec/s: 42 rss: 75Mb L: 42/43 MS: 1 ChangeBinInt- 00:07:29.159 [2024-11-15 12:29:09.466688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.159 [2024-11-15 12:29:09.466713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.159 [2024-11-15 12:29:09.466784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0be20000 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.159 [2024-11-15 12:29:09.466798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.159 [2024-11-15 12:29:09.466851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.159 [2024-11-15 12:29:09.466864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.159 [2024-11-15 12:29:09.466916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.159 [2024-11-15 12:29:09.466929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.418 #43 NEW cov: 12487 ft: 15523 corp: 28/921b lim: 45 exec/s: 43 rss: 75Mb L: 43/43 MS: 1 CopyPart- 00:07:29.418 [2024-11-15 12:29:09.526874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.418 [2024-11-15 12:29:09.526899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.418 [2024-11-15 12:29:09.526954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e2e209e2 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.418 [2024-11-15 12:29:09.526967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.418 [2024-11-15 12:29:09.527020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e2e20000 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.418 [2024-11-15 12:29:09.527036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.418 [2024-11-15 12:29:09.527090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:0000e201 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.418 [2024-11-15 12:29:09.527103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.418 #44 NEW cov: 12487 ft: 15535 corp: 29/963b lim: 45 exec/s: 44 rss: 75Mb L: 42/43 MS: 1 PersAutoDict- DE: "\001\000"- 00:07:29.418 [2024-11-15 12:29:09.566960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.418 [2024-11-15 12:29:09.566985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.418 [2024-11-15 12:29:09.567040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.418 [2024-11-15 12:29:09.567054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.418 [2024-11-15 12:29:09.567122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e200e2e2 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.418 [2024-11-15 12:29:09.567136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.418 [2024-11-15 12:29:09.567190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:e2e200e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.418 [2024-11-15 12:29:09.567203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.418 #45 NEW cov: 12487 ft: 15565 corp: 30/1000b lim: 45 exec/s: 45 rss: 75Mb L: 37/43 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\013"- 00:07:29.418 [2024-11-15 12:29:09.627139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0100e2e2 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.418 [2024-11-15 12:29:09.627165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.418 [2024-11-15 12:29:09.627220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e2e20be2 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.418 [2024-11-15 12:29:09.627234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.418 [2024-11-15 12:29:09.627304] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e2e20000 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.418 [2024-11-15 12:29:09.627321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.418 [2024-11-15 12:29:09.627377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:0000e201 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.418 [2024-11-15 12:29:09.627391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.418 #46 NEW cov: 12487 ft: 15596 corp: 31/1042b lim: 45 exec/s: 46 rss: 75Mb L: 42/43 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\013"- 00:07:29.418 [2024-11-15 12:29:09.667306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.418 [2024-11-15 12:29:09.667337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.418 [2024-11-15 12:29:09.667396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0be20000 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.418 [2024-11-15 12:29:09.667410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.418 [2024-11-15 12:29:09.667465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:000b0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.418 [2024-11-15 12:29:09.667478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.418 [2024-11-15 12:29:09.667532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.418 [2024-11-15 12:29:09.667545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.418 #47 NEW cov: 12487 ft: 15607 corp: 32/1085b lim: 45 exec/s: 47 rss: 75Mb L: 43/43 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\013"- 00:07:29.418 [2024-11-15 12:29:09.727147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:63630a63 cdw11:01000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.418 [2024-11-15 12:29:09.727172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.418 [2024-11-15 12:29:09.727227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:63636363 cdw11:63600003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.418 [2024-11-15 12:29:09.727241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.677 #48 NEW cov: 12487 ft: 15616 corp: 33/1103b lim: 45 exec/s: 48 rss: 76Mb L: 18/43 MS: 1 PersAutoDict- DE: "\001\000"- 00:07:29.677 [2024-11-15 12:29:09.787680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:3e3e633e cdw11:3e3e0001 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:29.677 [2024-11-15 12:29:09.787707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.677 [2024-11-15 12:29:09.787762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3e3e3e3e cdw11:3e3e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.677 [2024-11-15 12:29:09.787776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.677 [2024-11-15 12:29:09.787831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:3e3e3e3e cdw11:3e3e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.677 [2024-11-15 12:29:09.787844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.677 [2024-11-15 12:29:09.787897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:3e3e3e3e cdw11:3e3e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.677 [2024-11-15 12:29:09.787911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.677 #53 NEW cov: 12487 ft: 15624 corp: 34/1145b lim: 45 exec/s: 53 rss: 76Mb L: 42/43 MS: 5 EraseBytes-CrossOver-ChangeBinInt-ChangeBit-InsertRepeatedBytes- 00:07:29.677 [2024-11-15 12:29:09.847812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.677 [2024-11-15 12:29:09.847838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.677 [2024-11-15 12:29:09.847909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e2e2ffe2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.677 [2024-11-15 12:29:09.847928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.677 [2024-11-15 12:29:09.847982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e2e2e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.677 [2024-11-15 12:29:09.847996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.677 [2024-11-15 12:29:09.848048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:0100e2e2 cdw11:e2e20007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.677 [2024-11-15 12:29:09.848061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.677 #54 NEW cov: 12487 ft: 15629 corp: 35/1183b lim: 45 exec/s: 27 rss: 76Mb L: 38/43 MS: 1 InsertRepeatedBytes- 00:07:29.677 #54 DONE cov: 12487 ft: 15629 corp: 35/1183b lim: 45 exec/s: 27 rss: 76Mb 00:07:29.677 ###### Recommended dictionary. ###### 00:07:29.677 "\001\000" # Uses: 3 00:07:29.677 "\001\000\000\000\000\000\000\013" # Uses: 5 00:07:29.677 ###### End of recommended dictionary. 
###### 00:07:29.677 Done 54 runs in 2 second(s) 00:07:29.677 12:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:07:29.677 12:29:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:29.677 12:29:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:29.677 12:29:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:29.677 12:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:07:29.677 12:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:29.677 12:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:29.677 12:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:29.677 12:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:07:29.677 12:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:29.677 12:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:29.677 12:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:07:29.677 12:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:07:29.677 12:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:29.677 12:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:07:29.677 12:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:29.936 12:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:29.936 12:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:29.936 12:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:07:29.936 [2024-11-15 12:29:10.052604] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:07:29.936 [2024-11-15 12:29:10.052682] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid672074 ] 00:07:30.194 [2024-11-15 12:29:10.367655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.194 [2024-11-15 12:29:10.428120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.194 [2024-11-15 12:29:10.488006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.194 [2024-11-15 12:29:10.504244] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:07:30.194 INFO: Running with entropic power schedule (0xFF, 100). 
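Each run in this log ends with the same footer, as seen above for fuzzers 4 and 5: a final "#N DONE cov: ... ft: ... corp: ..." line, a recommended-dictionary block, and a "Done N runs in M second(s)" line. As an illustration only, a hypothetical helper (not part of the SPDK tree) could pull those totals out of a saved console log with a few lines of bash that assume exactly the formats shown here:

#!/usr/bin/env bash
# Hypothetical helper: summarize the per-run fuzzer footers from a console log like this one.
log=${1:?usage: $0 <console.log>}

# Final coverage per run, e.g. "#54 DONE cov: 12487 ft: 15629 corp: 35/1183b"
grep -Eo '#[0-9]+ DONE cov: [0-9]+ ft: [0-9]+ corp: [0-9]+/[0-9]+b' "$log" \
  | awk '{printf "run end: cov=%s features=%s corpus=%s\n", $4, $6, $8}'

# Wall-clock summary per run, e.g. "Done 54 runs in 2 second(s)"
grep -Eo 'Done [0-9]+ runs in [0-9]+ second\(s\)' "$log"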
00:07:30.194 INFO: Seed: 3184063912 00:07:30.451 INFO: Loaded 1 modules (387659 inline 8-bit counters): 387659 [0x2c4084c, 0x2c9f297), 00:07:30.451 INFO: Loaded 1 PC tables (387659 PCs): 387659 [0x2c9f298,0x3289748), 00:07:30.451 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:30.451 INFO: A corpus is not provided, starting from an empty corpus 00:07:30.451 #2 INITED exec/s: 0 rss: 66Mb 00:07:30.451 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:30.451 This may also happen if the target rejected all inputs we tried so far 00:07:30.451 [2024-11-15 12:29:10.563035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:30.451 [2024-11-15 12:29:10.563069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.451 [2024-11-15 12:29:10.563139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000065 cdw11:00000000 00:07:30.451 [2024-11-15 12:29:10.563153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.708 NEW_FUNC[1/714]: 0x446708 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:07:30.708 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:30.708 #4 NEW cov: 12177 ft: 12178 corp: 2/5b lim: 10 exec/s: 0 rss: 74Mb L: 4/4 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:30.708 [2024-11-15 12:29:10.893890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:30.708 [2024-11-15 12:29:10.893938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.708 [2024-11-15 12:29:10.893997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:30.708 [2024-11-15 12:29:10.894013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.708 #5 NEW cov: 12290 ft: 12766 corp: 3/9b lim: 10 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:07:30.709 [2024-11-15 12:29:10.954077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a10 cdw11:00000000 00:07:30.709 [2024-11-15 12:29:10.954105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.709 [2024-11-15 12:29:10.954159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001010 cdw11:00000000 00:07:30.709 [2024-11-15 12:29:10.954173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.709 [2024-11-15 12:29:10.954226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00001010 cdw11:00000000 00:07:30.709 [2024-11-15 12:29:10.954239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.709 #6 
NEW cov: 12296 ft: 13258 corp: 4/15b lim: 10 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:07:30.709 [2024-11-15 12:29:10.993902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:30.709 [2024-11-15 12:29:10.993928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.709 #7 NEW cov: 12381 ft: 13706 corp: 5/18b lim: 10 exec/s: 0 rss: 74Mb L: 3/6 MS: 1 EraseBytes- 00:07:30.709 [2024-11-15 12:29:11.034110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:30.709 [2024-11-15 12:29:11.034139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.709 [2024-11-15 12:29:11.034212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007965 cdw11:00000000 00:07:30.709 [2024-11-15 12:29:11.034226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.967 #8 NEW cov: 12381 ft: 13841 corp: 6/22b lim: 10 exec/s: 0 rss: 74Mb L: 4/6 MS: 1 ChangeByte- 00:07:30.967 [2024-11-15 12:29:11.074251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:30.967 [2024-11-15 12:29:11.074277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.967 [2024-11-15 12:29:11.074351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00008000 cdw11:00000000 00:07:30.967 [2024-11-15 12:29:11.074366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.967 #9 NEW cov: 12381 ft: 13884 corp: 7/26b lim: 10 exec/s: 0 rss: 74Mb L: 4/6 MS: 1 ChangeBit- 00:07:30.967 [2024-11-15 12:29:11.134269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000313b cdw11:00000000 00:07:30.967 [2024-11-15 12:29:11.134295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.967 #11 NEW cov: 12381 ft: 14026 corp: 8/28b lim: 10 exec/s: 0 rss: 74Mb L: 2/6 MS: 2 ChangeByte-InsertByte- 00:07:30.967 [2024-11-15 12:29:11.174872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:30.967 [2024-11-15 12:29:11.174899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.967 [2024-11-15 12:29:11.174954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:30.967 [2024-11-15 12:29:11.174969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.967 [2024-11-15 12:29:11.175022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:30.967 [2024-11-15 12:29:11.175035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.967 
[2024-11-15 12:29:11.175088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:30.967 [2024-11-15 12:29:11.175101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.967 [2024-11-15 12:29:11.175152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000065 cdw11:00000000 00:07:30.967 [2024-11-15 12:29:11.175165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:30.967 #12 NEW cov: 12381 ft: 14277 corp: 9/38b lim: 10 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:30.967 [2024-11-15 12:29:11.214484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000021 cdw11:00000000 00:07:30.967 [2024-11-15 12:29:11.214509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.967 #13 NEW cov: 12381 ft: 14323 corp: 10/41b lim: 10 exec/s: 0 rss: 74Mb L: 3/10 MS: 1 ChangeByte- 00:07:30.967 [2024-11-15 12:29:11.274694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000065 cdw11:00000000 00:07:30.967 [2024-11-15 12:29:11.274720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.967 #14 NEW cov: 12381 ft: 14405 corp: 11/43b lim: 10 exec/s: 0 rss: 74Mb L: 2/10 MS: 1 EraseBytes- 00:07:31.224 [2024-11-15 12:29:11.314827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e965 cdw11:00000000 00:07:31.224 [2024-11-15 12:29:11.314854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.224 #15 NEW cov: 12381 ft: 14447 corp: 12/45b lim: 10 exec/s: 0 rss: 74Mb L: 2/10 MS: 1 ChangeByte- 00:07:31.224 [2024-11-15 12:29:11.375085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000800 cdw11:00000000 00:07:31.224 [2024-11-15 12:29:11.375111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.224 [2024-11-15 12:29:11.375165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00008000 cdw11:00000000 00:07:31.224 [2024-11-15 12:29:11.375179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.224 #16 NEW cov: 12381 ft: 14467 corp: 13/49b lim: 10 exec/s: 0 rss: 74Mb L: 4/10 MS: 1 ChangeBinInt- 00:07:31.224 [2024-11-15 12:29:11.435506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2b cdw11:00000000 00:07:31.224 [2024-11-15 12:29:11.435532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.224 [2024-11-15 12:29:11.435585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002b2b cdw11:00000000 00:07:31.224 [2024-11-15 12:29:11.435599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.224 
[2024-11-15 12:29:11.435665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00002b2b cdw11:00000000 00:07:31.224 [2024-11-15 12:29:11.435679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.224 [2024-11-15 12:29:11.435732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00002b2b cdw11:00000000 00:07:31.224 [2024-11-15 12:29:11.435745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.224 NEW_FUNC[1/1]: 0x1c350e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:31.224 #17 NEW cov: 12404 ft: 14518 corp: 14/58b lim: 10 exec/s: 0 rss: 74Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:07:31.224 [2024-11-15 12:29:11.475237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000023b cdw11:00000000 00:07:31.224 [2024-11-15 12:29:11.475263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.224 #18 NEW cov: 12404 ft: 14543 corp: 15/60b lim: 10 exec/s: 0 rss: 74Mb L: 2/10 MS: 1 ChangeBinInt- 00:07:31.224 [2024-11-15 12:29:11.535423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000023b cdw11:00000000 00:07:31.224 [2024-11-15 12:29:11.535448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.481 #19 NEW cov: 12404 ft: 14555 corp: 16/62b lim: 10 exec/s: 19 rss: 74Mb L: 2/10 MS: 1 CopyPart- 00:07:31.481 [2024-11-15 12:29:11.595579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000099 cdw11:00000000 00:07:31.481 [2024-11-15 12:29:11.595604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.481 #20 NEW cov: 12404 ft: 14562 corp: 17/64b lim: 10 exec/s: 20 rss: 74Mb L: 2/10 MS: 1 ChangeByte- 00:07:31.481 [2024-11-15 12:29:11.635682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000199 cdw11:00000000 00:07:31.481 [2024-11-15 12:29:11.635710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.481 #21 NEW cov: 12404 ft: 14582 corp: 18/66b lim: 10 exec/s: 21 rss: 74Mb L: 2/10 MS: 1 ChangeBit- 00:07:31.481 [2024-11-15 12:29:11.696210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:31.481 [2024-11-15 12:29:11.696236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.481 [2024-11-15 12:29:11.696287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:31.481 [2024-11-15 12:29:11.696301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.481 [2024-11-15 12:29:11.696353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:31.481 [2024-11-15 12:29:11.696367] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.481 [2024-11-15 12:29:11.696435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000313b cdw11:00000000 00:07:31.481 [2024-11-15 12:29:11.696448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.481 #22 NEW cov: 12404 ft: 14615 corp: 19/74b lim: 10 exec/s: 22 rss: 74Mb L: 8/10 MS: 1 InsertRepeatedBytes- 00:07:31.481 [2024-11-15 12:29:11.736090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a10 cdw11:00000000 00:07:31.481 [2024-11-15 12:29:11.736117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.481 [2024-11-15 12:29:11.736172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001010 cdw11:00000000 00:07:31.481 [2024-11-15 12:29:11.736186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.481 #23 NEW cov: 12404 ft: 14644 corp: 20/78b lim: 10 exec/s: 23 rss: 75Mb L: 4/10 MS: 1 EraseBytes- 00:07:31.481 [2024-11-15 12:29:11.796157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000023b cdw11:00000000 00:07:31.481 [2024-11-15 12:29:11.796182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.481 #24 NEW cov: 12404 ft: 14659 corp: 21/80b lim: 10 exec/s: 24 rss: 75Mb L: 2/10 MS: 1 ShuffleBytes- 00:07:31.739 [2024-11-15 12:29:11.836527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000023b cdw11:00000000 00:07:31.739 [2024-11-15 12:29:11.836553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.739 [2024-11-15 12:29:11.836606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff01 cdw11:00000000 00:07:31.739 [2024-11-15 12:29:11.836620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.739 [2024-11-15 12:29:11.836671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:31.739 [2024-11-15 12:29:11.836684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.739 #25 NEW cov: 12404 ft: 14748 corp: 22/86b lim: 10 exec/s: 25 rss: 75Mb L: 6/10 MS: 1 CMP- DE: "\377\001\000\000"- 00:07:31.739 [2024-11-15 12:29:11.896452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00006500 cdw11:00000000 00:07:31.739 [2024-11-15 12:29:11.896478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.739 #26 NEW cov: 12404 ft: 14776 corp: 23/88b lim: 10 exec/s: 26 rss: 75Mb L: 2/10 MS: 1 ShuffleBytes- 00:07:31.739 [2024-11-15 12:29:11.936546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000075 cdw11:00000000 00:07:31.739 
[2024-11-15 12:29:11.936572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.739 #27 NEW cov: 12404 ft: 14798 corp: 24/90b lim: 10 exec/s: 27 rss: 75Mb L: 2/10 MS: 1 ChangeBit- 00:07:31.739 [2024-11-15 12:29:11.976685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000008b cdw11:00000000 00:07:31.739 [2024-11-15 12:29:11.976711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.739 #28 NEW cov: 12404 ft: 14811 corp: 25/92b lim: 10 exec/s: 28 rss: 75Mb L: 2/10 MS: 1 ChangeBinInt- 00:07:31.739 [2024-11-15 12:29:12.037086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a3b cdw11:00000000 00:07:31.739 [2024-11-15 12:29:12.037112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.739 [2024-11-15 12:29:12.037166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff01 cdw11:00000000 00:07:31.739 [2024-11-15 12:29:12.037179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.739 [2024-11-15 12:29:12.037230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:31.739 [2024-11-15 12:29:12.037243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.739 #29 NEW cov: 12404 ft: 14819 corp: 26/98b lim: 10 exec/s: 29 rss: 75Mb L: 6/10 MS: 1 ChangeBit- 00:07:31.997 [2024-11-15 12:29:12.097414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000009b cdw11:00000000 00:07:31.997 [2024-11-15 12:29:12.097441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.997 [2024-11-15 12:29:12.097493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009b9b cdw11:00000000 00:07:31.997 [2024-11-15 12:29:12.097507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.997 [2024-11-15 12:29:12.097557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009b9b cdw11:00000000 00:07:31.997 [2024-11-15 12:29:12.097570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.997 [2024-11-15 12:29:12.097619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00009b00 cdw11:00000000 00:07:31.997 [2024-11-15 12:29:12.097631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.997 #30 NEW cov: 12404 ft: 14854 corp: 27/107b lim: 10 exec/s: 30 rss: 75Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:07:31.997 [2024-11-15 12:29:12.137260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000021 cdw11:00000000 00:07:31.997 [2024-11-15 12:29:12.137286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.997 [2024-11-15 12:29:12.137344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00006065 cdw11:00000000 00:07:31.997 [2024-11-15 12:29:12.137359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.997 #31 NEW cov: 12404 ft: 14914 corp: 28/111b lim: 10 exec/s: 31 rss: 75Mb L: 4/10 MS: 1 InsertByte- 00:07:31.997 [2024-11-15 12:29:12.197412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000250a cdw11:00000000 00:07:31.997 [2024-11-15 12:29:12.197442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.997 [2024-11-15 12:29:12.197511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001010 cdw11:00000000 00:07:31.997 [2024-11-15 12:29:12.197525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.997 #32 NEW cov: 12404 ft: 14920 corp: 29/116b lim: 10 exec/s: 32 rss: 75Mb L: 5/10 MS: 1 InsertByte- 00:07:31.997 [2024-11-15 12:29:12.257676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000002ff cdw11:00000000 00:07:31.997 [2024-11-15 12:29:12.257701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.997 [2024-11-15 12:29:12.257755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:31.997 [2024-11-15 12:29:12.257769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.997 [2024-11-15 12:29:12.257839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff3b cdw11:00000000 00:07:31.997 [2024-11-15 12:29:12.257852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.997 #33 NEW cov: 12404 ft: 14921 corp: 30/122b lim: 10 exec/s: 33 rss: 75Mb L: 6/10 MS: 1 InsertRepeatedBytes- 00:07:31.997 [2024-11-15 12:29:12.297669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000250a cdw11:00000000 00:07:31.997 [2024-11-15 12:29:12.297694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.997 [2024-11-15 12:29:12.297749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001010 cdw11:00000000 00:07:31.997 [2024-11-15 12:29:12.297763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.997 #34 NEW cov: 12404 ft: 14928 corp: 31/127b lim: 10 exec/s: 34 rss: 75Mb L: 5/10 MS: 1 ChangeBinInt- 00:07:32.257 [2024-11-15 12:29:12.358118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:32.257 [2024-11-15 12:29:12.358144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.257 [2024-11-15 12:29:12.358199] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:32.257 [2024-11-15 12:29:12.358213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.257 [2024-11-15 12:29:12.358264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:32.257 [2024-11-15 12:29:12.358277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.257 [2024-11-15 12:29:12.358332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000313b cdw11:00000000 00:07:32.257 [2024-11-15 12:29:12.358346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.257 #35 NEW cov: 12404 ft: 14956 corp: 32/135b lim: 10 exec/s: 35 rss: 75Mb L: 8/10 MS: 1 CMP- DE: "\377\377\377\377"- 00:07:32.257 [2024-11-15 12:29:12.418129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000003b cdw11:00000000 00:07:32.257 [2024-11-15 12:29:12.418155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.257 [2024-11-15 12:29:12.418206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff01 cdw11:00000000 00:07:32.257 [2024-11-15 12:29:12.418223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.257 [2024-11-15 12:29:12.418276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:32.257 [2024-11-15 12:29:12.418289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.257 #36 NEW cov: 12404 ft: 14960 corp: 33/141b lim: 10 exec/s: 36 rss: 75Mb L: 6/10 MS: 1 ChangeBit- 00:07:32.257 [2024-11-15 12:29:12.457959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000001d9 cdw11:00000000 00:07:32.257 [2024-11-15 12:29:12.457985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.257 #37 NEW cov: 12404 ft: 15015 corp: 34/143b lim: 10 exec/s: 37 rss: 75Mb L: 2/10 MS: 1 ChangeBit- 00:07:32.257 [2024-11-15 12:29:12.518575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:32.257 [2024-11-15 12:29:12.518601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.257 [2024-11-15 12:29:12.518656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:32.257 [2024-11-15 12:29:12.518669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.257 [2024-11-15 12:29:12.518737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:32.257 [2024-11-15 12:29:12.518751] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.257 [2024-11-15 12:29:12.518799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000065 cdw11:00000000 00:07:32.257 [2024-11-15 12:29:12.518813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.257 #38 NEW cov: 12404 ft: 15024 corp: 35/151b lim: 10 exec/s: 19 rss: 75Mb L: 8/10 MS: 1 EraseBytes- 00:07:32.257 #38 DONE cov: 12404 ft: 15024 corp: 35/151b lim: 10 exec/s: 19 rss: 75Mb 00:07:32.257 ###### Recommended dictionary. ###### 00:07:32.257 "\377\001\000\000" # Uses: 0 00:07:32.257 "\377\377\377\377" # Uses: 0 00:07:32.257 ###### End of recommended dictionary. ###### 00:07:32.257 Done 38 runs in 2 second(s) 00:07:32.516 12:29:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:07:32.516 12:29:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:32.516 12:29:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:32.516 12:29:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:07:32.516 12:29:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:07:32.516 12:29:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:32.516 12:29:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:32.516 12:29:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:32.516 12:29:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:07:32.516 12:29:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:32.516 12:29:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:32.516 12:29:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:07:32.516 12:29:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:07:32.516 12:29:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:32.516 12:29:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:07:32.516 12:29:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:32.516 12:29:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:32.516 12:29:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:32.517 12:29:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:07:32.517 [2024-11-15 12:29:12.726212] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:07:32.517 [2024-11-15 12:29:12.726291] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid672407 ] 00:07:32.774 [2024-11-15 12:29:13.037394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.774 [2024-11-15 12:29:13.087941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.032 [2024-11-15 12:29:13.147437] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.032 [2024-11-15 12:29:13.163692] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:07:33.032 INFO: Running with entropic power schedule (0xFF, 100). 00:07:33.032 INFO: Seed: 1549070458 00:07:33.032 INFO: Loaded 1 modules (387659 inline 8-bit counters): 387659 [0x2c4084c, 0x2c9f297), 00:07:33.032 INFO: Loaded 1 PC tables (387659 PCs): 387659 [0x2c9f298,0x3289748), 00:07:33.032 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:33.032 INFO: A corpus is not provided, starting from an empty corpus 00:07:33.032 #2 INITED exec/s: 0 rss: 66Mb 00:07:33.032 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:33.032 This may also happen if the target rejected all inputs we tried so far 00:07:33.032 [2024-11-15 12:29:13.212878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a2c cdw11:00000000 00:07:33.032 [2024-11-15 12:29:13.212907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.290 NEW_FUNC[1/714]: 0x447108 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:07:33.290 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:33.290 #3 NEW cov: 12177 ft: 12174 corp: 2/3b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 InsertByte- 00:07:33.291 [2024-11-15 12:29:13.543746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:33.291 [2024-11-15 12:29:13.543785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.291 #4 NEW cov: 12290 ft: 12767 corp: 3/5b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 CopyPart- 00:07:33.291 [2024-11-15 12:29:13.583744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:33.291 [2024-11-15 12:29:13.583772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.291 #5 NEW cov: 12296 ft: 13087 corp: 4/8b lim: 10 exec/s: 0 rss: 74Mb L: 3/3 MS: 1 CopyPart- 00:07:33.549 [2024-11-15 12:29:13.643943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003a0a cdw11:00000000 00:07:33.549 [2024-11-15 12:29:13.643969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.549 #8 NEW cov: 12381 ft: 13468 corp: 5/10b lim: 10 exec/s: 0 rss: 74Mb L: 2/3 MS: 3 
CrossOver-CopyPart-InsertByte- 00:07:33.549 [2024-11-15 12:29:13.684005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a5d cdw11:00000000 00:07:33.549 [2024-11-15 12:29:13.684030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.549 #9 NEW cov: 12381 ft: 13587 corp: 6/12b lim: 10 exec/s: 0 rss: 74Mb L: 2/3 MS: 1 InsertByte- 00:07:33.549 [2024-11-15 12:29:13.724115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a60 cdw11:00000000 00:07:33.549 [2024-11-15 12:29:13.724140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.549 #10 NEW cov: 12381 ft: 13673 corp: 7/14b lim: 10 exec/s: 0 rss: 74Mb L: 2/3 MS: 1 ChangeByte- 00:07:33.549 [2024-11-15 12:29:13.784300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:33.549 [2024-11-15 12:29:13.784331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.549 #11 NEW cov: 12381 ft: 13758 corp: 8/16b lim: 10 exec/s: 0 rss: 74Mb L: 2/3 MS: 1 CopyPart- 00:07:33.549 [2024-11-15 12:29:13.824815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:07:33.549 [2024-11-15 12:29:13.824842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.549 [2024-11-15 12:29:13.824896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.549 [2024-11-15 12:29:13.824910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.549 [2024-11-15 12:29:13.824963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.549 [2024-11-15 12:29:13.824976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.549 [2024-11-15 12:29:13.825028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.549 [2024-11-15 12:29:13.825041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.549 #12 NEW cov: 12381 ft: 14233 corp: 9/25b lim: 10 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:07:33.549 [2024-11-15 12:29:13.884867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:07:33.549 [2024-11-15 12:29:13.884894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.549 [2024-11-15 12:29:13.884947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.549 [2024-11-15 12:29:13.884961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.549 [2024-11-15 12:29:13.885015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff5d cdw11:00000000 00:07:33.549 [2024-11-15 12:29:13.885028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.807 #13 NEW cov: 12381 ft: 14519 corp: 10/31b lim: 10 exec/s: 0 rss: 74Mb L: 6/9 MS: 1 EraseBytes- 00:07:33.807 [2024-11-15 12:29:13.945011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:07:33.807 [2024-11-15 12:29:13.945037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.807 [2024-11-15 12:29:13.945090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000010a cdw11:00000000 00:07:33.807 [2024-11-15 12:29:13.945110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.807 [2024-11-15 12:29:13.945163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff5d cdw11:00000000 00:07:33.807 [2024-11-15 12:29:13.945176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.807 #14 NEW cov: 12381 ft: 14587 corp: 11/37b lim: 10 exec/s: 0 rss: 74Mb L: 6/9 MS: 1 ChangeBinInt- 00:07:33.807 [2024-11-15 12:29:14.004927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000600a cdw11:00000000 00:07:33.807 [2024-11-15 12:29:14.004953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.807 #15 NEW cov: 12381 ft: 14657 corp: 12/39b lim: 10 exec/s: 0 rss: 74Mb L: 2/9 MS: 1 ShuffleBytes- 00:07:33.807 [2024-11-15 12:29:14.065302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:33.807 [2024-11-15 12:29:14.065333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.807 [2024-11-15 12:29:14.065402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002cff cdw11:00000000 00:07:33.807 [2024-11-15 12:29:14.065416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.807 [2024-11-15 12:29:14.065467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff5d cdw11:00000000 00:07:33.807 [2024-11-15 12:29:14.065482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.807 #16 NEW cov: 12381 ft: 14691 corp: 13/45b lim: 10 exec/s: 0 rss: 74Mb L: 6/9 MS: 1 CrossOver- 00:07:33.807 [2024-11-15 12:29:14.105564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:07:33.807 [2024-11-15 12:29:14.105600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.807 [2024-11-15 12:29:14.105657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.807 [2024-11-15 12:29:14.105671] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.807 [2024-11-15 12:29:14.105738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.807 [2024-11-15 12:29:14.105752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.807 [2024-11-15 12:29:14.105803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.807 [2024-11-15 12:29:14.105817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.807 NEW_FUNC[1/1]: 0x1c350e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:33.807 #17 NEW cov: 12404 ft: 14736 corp: 14/54b lim: 10 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ShuffleBytes- 00:07:33.807 [2024-11-15 12:29:14.145321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000a0fe cdw11:00000000 00:07:33.807 [2024-11-15 12:29:14.145349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.065 #18 NEW cov: 12404 ft: 14760 corp: 15/56b lim: 10 exec/s: 0 rss: 74Mb L: 2/9 MS: 1 ChangeBinInt- 00:07:34.065 [2024-11-15 12:29:14.205519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003a08 cdw11:00000000 00:07:34.065 [2024-11-15 12:29:14.205544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.065 #19 NEW cov: 12404 ft: 14782 corp: 16/58b lim: 10 exec/s: 19 rss: 74Mb L: 2/9 MS: 1 ChangeBit- 00:07:34.065 [2024-11-15 12:29:14.265919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002a0a cdw11:00000000 00:07:34.065 [2024-11-15 12:29:14.265944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.065 [2024-11-15 12:29:14.266013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002cff cdw11:00000000 00:07:34.065 [2024-11-15 12:29:14.266028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.065 [2024-11-15 12:29:14.266079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff5d cdw11:00000000 00:07:34.065 [2024-11-15 12:29:14.266092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.065 #20 NEW cov: 12404 ft: 14829 corp: 17/64b lim: 10 exec/s: 20 rss: 74Mb L: 6/9 MS: 1 ChangeBit- 00:07:34.065 [2024-11-15 12:29:14.326043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:34.065 [2024-11-15 12:29:14.326068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.065 [2024-11-15 12:29:14.326139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002c3d cdw11:00000000 00:07:34.065 [2024-11-15 
12:29:14.326153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.065 [2024-11-15 12:29:14.326204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.065 [2024-11-15 12:29:14.326217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.065 #21 NEW cov: 12404 ft: 14852 corp: 18/71b lim: 10 exec/s: 21 rss: 74Mb L: 7/9 MS: 1 InsertByte- 00:07:34.065 [2024-11-15 12:29:14.366140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:07:34.066 [2024-11-15 12:29:14.366165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.066 [2024-11-15 12:29:14.366218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000010a cdw11:00000000 00:07:34.066 [2024-11-15 12:29:14.366232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.066 [2024-11-15 12:29:14.366299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff59 cdw11:00000000 00:07:34.066 [2024-11-15 12:29:14.366319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.066 #22 NEW cov: 12404 ft: 14889 corp: 19/77b lim: 10 exec/s: 22 rss: 74Mb L: 6/9 MS: 1 ChangeBit- 00:07:34.323 [2024-11-15 12:29:14.426116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a5d cdw11:00000000 00:07:34.323 [2024-11-15 12:29:14.426142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.323 #24 NEW cov: 12404 ft: 14901 corp: 20/79b lim: 10 exec/s: 24 rss: 74Mb L: 2/9 MS: 2 EraseBytes-InsertByte- 00:07:34.323 [2024-11-15 12:29:14.466174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005d0a cdw11:00000000 00:07:34.323 [2024-11-15 12:29:14.466200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.323 #25 NEW cov: 12404 ft: 14937 corp: 21/81b lim: 10 exec/s: 25 rss: 74Mb L: 2/9 MS: 1 ShuffleBytes- 00:07:34.323 [2024-11-15 12:29:14.506276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002c8c cdw11:00000000 00:07:34.323 [2024-11-15 12:29:14.506304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.323 #29 NEW cov: 12404 ft: 14998 corp: 22/83b lim: 10 exec/s: 29 rss: 74Mb L: 2/9 MS: 4 ChangeByte-ChangeByte-ShuffleBytes-CrossOver- 00:07:34.323 [2024-11-15 12:29:14.546529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:34.323 [2024-11-15 12:29:14.546554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.323 [2024-11-15 12:29:14.546607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002c5d 
cdw11:00000000 00:07:34.323 [2024-11-15 12:29:14.546622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.323 #30 NEW cov: 12404 ft: 15189 corp: 23/87b lim: 10 exec/s: 30 rss: 74Mb L: 4/9 MS: 1 EraseBytes- 00:07:34.323 [2024-11-15 12:29:14.586996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.323 [2024-11-15 12:29:14.587022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.323 [2024-11-15 12:29:14.587073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.323 [2024-11-15 12:29:14.587087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.323 [2024-11-15 12:29:14.587139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.323 [2024-11-15 12:29:14.587153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.323 [2024-11-15 12:29:14.587203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.323 [2024-11-15 12:29:14.587217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.323 [2024-11-15 12:29:14.587269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:07:34.324 [2024-11-15 12:29:14.587282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:34.324 #32 NEW cov: 12404 ft: 15232 corp: 24/97b lim: 10 exec/s: 32 rss: 74Mb L: 10/10 MS: 2 EraseBytes-InsertRepeatedBytes- 00:07:34.324 [2024-11-15 12:29:14.626984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000a0fe cdw11:00000000 00:07:34.324 [2024-11-15 12:29:14.627008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.324 [2024-11-15 12:29:14.627060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.324 [2024-11-15 12:29:14.627073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.324 [2024-11-15 12:29:14.627140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.324 [2024-11-15 12:29:14.627154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.324 [2024-11-15 12:29:14.627207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.324 [2024-11-15 12:29:14.627221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.582 #33 NEW cov: 12404 ft: 15242 corp: 25/105b lim: 10 exec/s: 33 rss: 75Mb L: 8/10 MS: 1 InsertRepeatedBytes- 00:07:34.582 
[2024-11-15 12:29:14.687049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a06 cdw11:00000000 00:07:34.582 [2024-11-15 12:29:14.687074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.582 [2024-11-15 12:29:14.687128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000fef5 cdw11:00000000 00:07:34.582 [2024-11-15 12:29:14.687142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.582 [2024-11-15 12:29:14.687194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000005d cdw11:00000000 00:07:34.582 [2024-11-15 12:29:14.687207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.582 #34 NEW cov: 12404 ft: 15306 corp: 26/111b lim: 10 exec/s: 34 rss: 75Mb L: 6/10 MS: 1 ChangeBinInt- 00:07:34.582 [2024-11-15 12:29:14.726930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a6c cdw11:00000000 00:07:34.582 [2024-11-15 12:29:14.726954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.582 #35 NEW cov: 12404 ft: 15330 corp: 27/113b lim: 10 exec/s: 35 rss: 75Mb L: 2/10 MS: 1 ChangeBit- 00:07:34.582 [2024-11-15 12:29:14.766995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 00:07:34.582 [2024-11-15 12:29:14.767019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.582 #36 NEW cov: 12404 ft: 15359 corp: 28/116b lim: 10 exec/s: 36 rss: 75Mb L: 3/10 MS: 1 InsertByte- 00:07:34.582 [2024-11-15 12:29:14.807242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002cff cdw11:00000000 00:07:34.582 [2024-11-15 12:29:14.807269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.582 [2024-11-15 12:29:14.807323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff5d cdw11:00000000 00:07:34.582 [2024-11-15 12:29:14.807338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.582 #37 NEW cov: 12404 ft: 15395 corp: 29/120b lim: 10 exec/s: 37 rss: 75Mb L: 4/10 MS: 1 EraseBytes- 00:07:34.582 [2024-11-15 12:29:14.867275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002a0a cdw11:00000000 00:07:34.582 [2024-11-15 12:29:14.867301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.582 #38 NEW cov: 12404 ft: 15410 corp: 30/123b lim: 10 exec/s: 38 rss: 75Mb L: 3/10 MS: 1 EraseBytes- 00:07:34.582 [2024-11-15 12:29:14.907746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000feff cdw11:00000000 00:07:34.582 [2024-11-15 12:29:14.907771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.582 
[2024-11-15 12:29:14.907824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.582 [2024-11-15 12:29:14.907838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.582 [2024-11-15 12:29:14.907906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000a0ff cdw11:00000000 00:07:34.582 [2024-11-15 12:29:14.907921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.582 [2024-11-15 12:29:14.907974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.582 [2024-11-15 12:29:14.907991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.841 #39 NEW cov: 12404 ft: 15428 corp: 31/131b lim: 10 exec/s: 39 rss: 75Mb L: 8/10 MS: 1 ShuffleBytes- 00:07:34.841 [2024-11-15 12:29:14.967595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000cff5 cdw11:00000000 00:07:34.841 [2024-11-15 12:29:14.967620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.841 #40 NEW cov: 12404 ft: 15443 corp: 32/133b lim: 10 exec/s: 40 rss: 75Mb L: 2/10 MS: 1 ChangeBinInt- 00:07:34.841 [2024-11-15 12:29:15.007660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003a9c cdw11:00000000 00:07:34.841 [2024-11-15 12:29:15.007685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.841 #41 NEW cov: 12404 ft: 15475 corp: 33/135b lim: 10 exec/s: 41 rss: 75Mb L: 2/10 MS: 1 ChangeByte- 00:07:34.841 [2024-11-15 12:29:15.068008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:34.841 [2024-11-15 12:29:15.068033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.841 [2024-11-15 12:29:15.068087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002c3d cdw11:00000000 00:07:34.841 [2024-11-15 12:29:15.068101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.841 #42 NEW cov: 12404 ft: 15476 corp: 34/140b lim: 10 exec/s: 42 rss: 75Mb L: 5/10 MS: 1 EraseBytes- 00:07:34.841 [2024-11-15 12:29:15.127995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000083a cdw11:00000000 00:07:34.841 [2024-11-15 12:29:15.128020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.841 #43 NEW cov: 12404 ft: 15510 corp: 35/142b lim: 10 exec/s: 43 rss: 75Mb L: 2/10 MS: 1 ShuffleBytes- 00:07:34.841 [2024-11-15 12:29:15.168160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003a08 cdw11:00000000 00:07:34.841 [2024-11-15 12:29:15.168187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:35.121 #44 NEW cov: 12404 ft: 15518 corp: 36/144b lim: 10 exec/s: 44 rss: 75Mb L: 2/10 MS: 1 ShuffleBytes- 00:07:35.121 [2024-11-15 12:29:15.208368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:35.121 [2024-11-15 12:29:15.208395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.121 [2024-11-15 12:29:15.208464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000605d cdw11:00000000 00:07:35.121 [2024-11-15 12:29:15.208479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.121 #45 NEW cov: 12404 ft: 15547 corp: 37/148b lim: 10 exec/s: 22 rss: 75Mb L: 4/10 MS: 1 ChangeByte- 00:07:35.121 #45 DONE cov: 12404 ft: 15547 corp: 37/148b lim: 10 exec/s: 22 rss: 75Mb 00:07:35.121 Done 45 runs in 2 second(s) 00:07:35.121 12:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:07:35.121 12:29:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:35.121 12:29:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:35.121 12:29:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:07:35.121 12:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:07:35.121 12:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:35.121 12:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:35.121 12:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:35.121 12:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:07:35.121 12:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:35.121 12:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:35.121 12:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:07:35.121 12:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:07:35.121 12:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:35.121 12:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:07:35.121 12:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:35.121 12:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:35.121 12:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:35.121 12:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:07:35.121 [2024-11-15 12:29:15.413968] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:07:35.121 [2024-11-15 12:29:15.414043] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid672768 ] 00:07:35.430 [2024-11-15 12:29:15.722083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.698 [2024-11-15 12:29:15.783096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.698 [2024-11-15 12:29:15.842797] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.698 [2024-11-15 12:29:15.858952] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:07:35.698 INFO: Running with entropic power schedule (0xFF, 100). 00:07:35.698 INFO: Seed: 4244086755 00:07:35.698 INFO: Loaded 1 modules (387659 inline 8-bit counters): 387659 [0x2c4084c, 0x2c9f297), 00:07:35.698 INFO: Loaded 1 PC tables (387659 PCs): 387659 [0x2c9f298,0x3289748), 00:07:35.698 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:35.698 INFO: A corpus is not provided, starting from an empty corpus 00:07:35.698 [2024-11-15 12:29:15.906927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.698 [2024-11-15 12:29:15.906957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.698 #2 INITED cov: 12205 ft: 12196 corp: 1/1b exec/s: 0 rss: 72Mb 00:07:35.698 [2024-11-15 12:29:15.946946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.698 [2024-11-15 12:29:15.946973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.698 #3 NEW cov: 12318 ft: 12824 corp: 2/2b lim: 5 exec/s: 0 rss: 72Mb L: 1/1 MS: 1 ChangeBit- 00:07:35.698 [2024-11-15 12:29:16.007100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.698 [2024-11-15 12:29:16.007125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.956 #4 NEW cov: 12324 ft: 12945 corp: 3/3b lim: 5 exec/s: 0 rss: 72Mb L: 1/1 MS: 1 ChangeByte- 00:07:35.956 [2024-11-15 12:29:16.067236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.956 [2024-11-15 12:29:16.067263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.956 #5 NEW cov: 12409 ft: 13241 corp: 4/4b lim: 5 exec/s: 0 rss: 73Mb L: 1/1 MS: 1 CrossOver- 00:07:35.956 [2024-11-15 12:29:16.127618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.956 
[2024-11-15 12:29:16.127643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.956 [2024-11-15 12:29:16.127729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.956 [2024-11-15 12:29:16.127743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.956 #6 NEW cov: 12409 ft: 13941 corp: 5/6b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 CrossOver- 00:07:35.956 [2024-11-15 12:29:16.167568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.956 [2024-11-15 12:29:16.167592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.956 #7 NEW cov: 12409 ft: 14042 corp: 6/7b lim: 5 exec/s: 0 rss: 73Mb L: 1/2 MS: 1 ChangeByte- 00:07:35.956 [2024-11-15 12:29:16.227720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.956 [2024-11-15 12:29:16.227745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.956 #8 NEW cov: 12409 ft: 14264 corp: 7/8b lim: 5 exec/s: 0 rss: 73Mb L: 1/2 MS: 1 CrossOver- 00:07:35.956 [2024-11-15 12:29:16.267939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.956 [2024-11-15 12:29:16.267965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.215 #9 NEW cov: 12409 ft: 14310 corp: 8/9b lim: 5 exec/s: 0 rss: 73Mb L: 1/2 MS: 1 ChangeByte- 00:07:36.215 [2024-11-15 12:29:16.327999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.215 [2024-11-15 12:29:16.328024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.215 #10 NEW cov: 12409 ft: 14411 corp: 9/10b lim: 5 exec/s: 0 rss: 73Mb L: 1/2 MS: 1 ChangeBit- 00:07:36.215 [2024-11-15 12:29:16.388298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.215 [2024-11-15 12:29:16.388327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.215 [2024-11-15 12:29:16.388383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.215 [2024-11-15 12:29:16.388397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.215 #11 NEW cov: 12409 ft: 14522 corp: 10/12b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 InsertByte- 00:07:36.215 [2024-11-15 12:29:16.428416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) 
qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.215 [2024-11-15 12:29:16.428444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.215 [2024-11-15 12:29:16.428498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.215 [2024-11-15 12:29:16.428511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.215 #12 NEW cov: 12409 ft: 14605 corp: 11/14b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 InsertByte- 00:07:36.215 [2024-11-15 12:29:16.468554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.215 [2024-11-15 12:29:16.468579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.215 [2024-11-15 12:29:16.468631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.215 [2024-11-15 12:29:16.468645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.215 #13 NEW cov: 12409 ft: 14642 corp: 12/16b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 InsertByte- 00:07:36.215 [2024-11-15 12:29:16.508500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.215 [2024-11-15 12:29:16.508526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.215 #14 NEW cov: 12409 ft: 14653 corp: 13/17b lim: 5 exec/s: 0 rss: 73Mb L: 1/2 MS: 1 CrossOver- 00:07:36.215 [2024-11-15 12:29:16.548618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.215 [2024-11-15 12:29:16.548644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.473 #15 NEW cov: 12409 ft: 14681 corp: 14/18b lim: 5 exec/s: 0 rss: 73Mb L: 1/2 MS: 1 EraseBytes- 00:07:36.473 [2024-11-15 12:29:16.608947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.473 [2024-11-15 12:29:16.608973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.473 [2024-11-15 12:29:16.609029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.473 [2024-11-15 12:29:16.609043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.473 #16 NEW cov: 12409 ft: 14688 corp: 15/20b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 ChangeByte- 00:07:36.473 [2024-11-15 12:29:16.649097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT 
(15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.473 [2024-11-15 12:29:16.649123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.473 [2024-11-15 12:29:16.649179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.473 [2024-11-15 12:29:16.649192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.473 #17 NEW cov: 12409 ft: 14711 corp: 16/22b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 CrossOver- 00:07:36.473 [2024-11-15 12:29:16.689010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.473 [2024-11-15 12:29:16.689039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.473 #18 NEW cov: 12409 ft: 14747 corp: 17/23b lim: 5 exec/s: 0 rss: 73Mb L: 1/2 MS: 1 EraseBytes- 00:07:36.473 [2024-11-15 12:29:16.749343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.473 [2024-11-15 12:29:16.749368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.473 [2024-11-15 12:29:16.749424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.473 [2024-11-15 12:29:16.749438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.473 #19 NEW cov: 12409 ft: 14756 corp: 18/25b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 CopyPart- 00:07:36.474 [2024-11-15 12:29:16.789290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.474 [2024-11-15 12:29:16.789321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.731 NEW_FUNC[1/1]: 0x1c350e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:36.731 #20 NEW cov: 12432 ft: 14782 corp: 19/26b lim: 5 exec/s: 20 rss: 74Mb L: 1/2 MS: 1 ChangeByte- 00:07:36.989 [2024-11-15 12:29:17.090569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.989 [2024-11-15 12:29:17.090609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.989 [2024-11-15 12:29:17.090671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.989 [2024-11-15 12:29:17.090685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.989 [2024-11-15 12:29:17.090741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.989 [2024-11-15 12:29:17.090755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.989 #21 NEW cov: 12432 ft: 15028 corp: 20/29b lim: 5 exec/s: 21 rss: 74Mb L: 3/3 MS: 1 InsertByte- 00:07:36.989 [2024-11-15 12:29:17.150480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.990 [2024-11-15 12:29:17.150508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.990 [2024-11-15 12:29:17.150568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.990 [2024-11-15 12:29:17.150583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.990 #22 NEW cov: 12432 ft: 15044 corp: 21/31b lim: 5 exec/s: 22 rss: 74Mb L: 2/3 MS: 1 CopyPart- 00:07:36.990 [2024-11-15 12:29:17.191131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.990 [2024-11-15 12:29:17.191157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.990 [2024-11-15 12:29:17.191217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.990 [2024-11-15 12:29:17.191234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.990 [2024-11-15 12:29:17.191290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.990 [2024-11-15 12:29:17.191303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.990 [2024-11-15 12:29:17.191364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.990 [2024-11-15 12:29:17.191377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.990 [2024-11-15 12:29:17.191434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.990 [2024-11-15 12:29:17.191447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:36.990 #23 NEW cov: 12432 ft: 15439 corp: 22/36b lim: 5 exec/s: 23 rss: 74Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:36.990 [2024-11-15 12:29:17.230544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.990 [2024-11-15 12:29:17.230569] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.990 #24 NEW cov: 12432 ft: 15495 corp: 23/37b lim: 5 exec/s: 24 rss: 74Mb L: 1/5 MS: 1 ChangeBit- 00:07:36.990 [2024-11-15 12:29:17.270865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.990 [2024-11-15 12:29:17.270889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.990 [2024-11-15 12:29:17.270948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.990 [2024-11-15 12:29:17.270962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.990 #25 NEW cov: 12432 ft: 15549 corp: 24/39b lim: 5 exec/s: 25 rss: 74Mb L: 2/5 MS: 1 CrossOver- 00:07:36.990 [2024-11-15 12:29:17.331029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.990 [2024-11-15 12:29:17.331056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.990 [2024-11-15 12:29:17.331116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.990 [2024-11-15 12:29:17.331131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.257 #26 NEW cov: 12432 ft: 15580 corp: 25/41b lim: 5 exec/s: 26 rss: 74Mb L: 2/5 MS: 1 CrossOver- 00:07:37.257 [2024-11-15 12:29:17.391656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.257 [2024-11-15 12:29:17.391681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.258 [2024-11-15 12:29:17.391742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.258 [2024-11-15 12:29:17.391756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.258 [2024-11-15 12:29:17.391834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.258 [2024-11-15 12:29:17.391849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.258 [2024-11-15 12:29:17.391904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.258 [2024-11-15 12:29:17.391917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.258 [2024-11-15 12:29:17.391974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.258 [2024-11-15 12:29:17.391988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:37.258 #27 NEW cov: 12432 ft: 15591 corp: 26/46b lim: 5 exec/s: 27 rss: 74Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:37.258 [2024-11-15 12:29:17.451388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.258 [2024-11-15 12:29:17.451413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.258 [2024-11-15 12:29:17.451490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.258 [2024-11-15 12:29:17.451505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.258 #28 NEW cov: 12432 ft: 15599 corp: 27/48b lim: 5 exec/s: 28 rss: 74Mb L: 2/5 MS: 1 CopyPart- 00:07:37.258 [2024-11-15 12:29:17.491755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.258 [2024-11-15 12:29:17.491780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.258 [2024-11-15 12:29:17.491840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.258 [2024-11-15 12:29:17.491854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.258 [2024-11-15 12:29:17.491911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.258 [2024-11-15 12:29:17.491924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.258 [2024-11-15 12:29:17.491980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.259 [2024-11-15 12:29:17.491993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.259 #29 NEW cov: 12432 ft: 15609 corp: 28/52b lim: 5 exec/s: 29 rss: 74Mb L: 4/5 MS: 1 CopyPart- 00:07:37.259 [2024-11-15 12:29:17.551598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.259 [2024-11-15 12:29:17.551623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.259 [2024-11-15 12:29:17.551682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.259 [2024-11-15 12:29:17.551696] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.259 #30 NEW cov: 12432 ft: 15624 corp: 29/54b lim: 5 exec/s: 30 rss: 75Mb L: 2/5 MS: 1 InsertByte- 00:07:37.523 [2024-11-15 12:29:17.612324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.523 [2024-11-15 12:29:17.612350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.523 [2024-11-15 12:29:17.612411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.523 [2024-11-15 12:29:17.612426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.523 [2024-11-15 12:29:17.612484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.523 [2024-11-15 12:29:17.612497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.523 [2024-11-15 12:29:17.612555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.523 [2024-11-15 12:29:17.612568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.523 [2024-11-15 12:29:17.612627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.523 [2024-11-15 12:29:17.612640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:37.523 #31 NEW cov: 12432 ft: 15646 corp: 30/59b lim: 5 exec/s: 31 rss: 75Mb L: 5/5 MS: 1 CMP- DE: "\377\377\377\377"- 00:07:37.523 [2024-11-15 12:29:17.652073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.523 [2024-11-15 12:29:17.652098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.523 [2024-11-15 12:29:17.652157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.523 [2024-11-15 12:29:17.652186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.523 [2024-11-15 12:29:17.652244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.523 [2024-11-15 12:29:17.652258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.523 #32 NEW cov: 12432 ft: 15726 corp: 31/62b lim: 5 exec/s: 32 rss: 75Mb L: 3/5 MS: 1 CrossOver- 00:07:37.523 [2024-11-15 12:29:17.712084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.523 [2024-11-15 12:29:17.712109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.523 [2024-11-15 12:29:17.712184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.523 [2024-11-15 12:29:17.712198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.523 #33 NEW cov: 12432 ft: 15739 corp: 32/64b lim: 5 exec/s: 33 rss: 75Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:37.523 [2024-11-15 12:29:17.772719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.523 [2024-11-15 12:29:17.772747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.523 [2024-11-15 12:29:17.772807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.524 [2024-11-15 12:29:17.772821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.524 [2024-11-15 12:29:17.772878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.524 [2024-11-15 12:29:17.772891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.524 [2024-11-15 12:29:17.772949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.524 [2024-11-15 12:29:17.772962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.524 [2024-11-15 12:29:17.773017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.524 [2024-11-15 12:29:17.773030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:37.524 #34 NEW cov: 12432 ft: 15754 corp: 33/69b lim: 5 exec/s: 34 rss: 75Mb L: 5/5 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:07:37.524 [2024-11-15 12:29:17.832222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.524 [2024-11-15 12:29:17.832247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.782 #35 NEW cov: 12432 ft: 15777 corp: 34/70b lim: 5 exec/s: 35 rss: 75Mb L: 1/5 MS: 1 ChangeByte- 00:07:37.782 [2024-11-15 12:29:17.892733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.782 [2024-11-15 12:29:17.892761] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.782 [2024-11-15 12:29:17.892819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.782 [2024-11-15 12:29:17.892833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.782 [2024-11-15 12:29:17.892890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.782 [2024-11-15 12:29:17.892904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.782 #36 NEW cov: 12432 ft: 15814 corp: 35/73b lim: 5 exec/s: 18 rss: 75Mb L: 3/5 MS: 1 CrossOver- 00:07:37.782 #36 DONE cov: 12432 ft: 15814 corp: 35/73b lim: 5 exec/s: 18 rss: 75Mb 00:07:37.782 ###### Recommended dictionary. ###### 00:07:37.782 "\377\377\377\377" # Uses: 1 00:07:37.782 ###### End of recommended dictionary. ###### 00:07:37.782 Done 36 runs in 2 second(s) 00:07:37.782 12:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:07:37.782 12:29:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:37.782 12:29:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:37.782 12:29:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:07:37.782 12:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:07:37.782 12:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:37.782 12:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:37.782 12:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:37.782 12:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:07:37.782 12:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:37.782 12:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:37.782 12:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:07:37.782 12:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:07:37.782 12:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:37.782 12:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:07:37.782 12:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:37.782 12:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:37.782 12:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:37.782 12:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:07:37.782 [2024-11-15 12:29:18.081493] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:07:37.782 [2024-11-15 12:29:18.081582] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid673110 ] 00:07:38.346 [2024-11-15 12:29:18.420142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.346 [2024-11-15 12:29:18.467917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.346 [2024-11-15 12:29:18.527248] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.346 [2024-11-15 12:29:18.543393] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:07:38.346 INFO: Running with entropic power schedule (0xFF, 100). 00:07:38.346 INFO: Seed: 2636116324 00:07:38.346 INFO: Loaded 1 modules (387659 inline 8-bit counters): 387659 [0x2c4084c, 0x2c9f297), 00:07:38.346 INFO: Loaded 1 PC tables (387659 PCs): 387659 [0x2c9f298,0x3289748), 00:07:38.346 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:38.346 INFO: A corpus is not provided, starting from an empty corpus 00:07:38.346 [2024-11-15 12:29:18.602963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.346 [2024-11-15 12:29:18.602992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.346 #2 INITED cov: 12205 ft: 12202 corp: 1/1b exec/s: 0 rss: 72Mb 00:07:38.346 [2024-11-15 12:29:18.643115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.346 [2024-11-15 12:29:18.643142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.346 [2024-11-15 12:29:18.643203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.346 [2024-11-15 12:29:18.643217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.346 #3 NEW cov: 12318 ft: 13540 corp: 2/3b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 InsertByte- 00:07:38.604 [2024-11-15 12:29:18.703477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.604 [2024-11-15 12:29:18.703503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.604 [2024-11-15 12:29:18.703563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.604 [2024-11-15 
12:29:18.703577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.604 [2024-11-15 12:29:18.703646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.604 [2024-11-15 12:29:18.703660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.604 #4 NEW cov: 12324 ft: 13868 corp: 3/6b lim: 5 exec/s: 0 rss: 73Mb L: 3/3 MS: 1 InsertByte- 00:07:38.604 [2024-11-15 12:29:18.763466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.604 [2024-11-15 12:29:18.763492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.604 [2024-11-15 12:29:18.763552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.604 [2024-11-15 12:29:18.763566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.604 #5 NEW cov: 12409 ft: 14208 corp: 4/8b lim: 5 exec/s: 0 rss: 73Mb L: 2/3 MS: 1 ShuffleBytes- 00:07:38.604 [2024-11-15 12:29:18.803594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.604 [2024-11-15 12:29:18.803619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.604 [2024-11-15 12:29:18.803678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.604 [2024-11-15 12:29:18.803692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.604 #6 NEW cov: 12409 ft: 14339 corp: 5/10b lim: 5 exec/s: 0 rss: 73Mb L: 2/3 MS: 1 EraseBytes- 00:07:38.604 [2024-11-15 12:29:18.863552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.604 [2024-11-15 12:29:18.863577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.604 #7 NEW cov: 12409 ft: 14454 corp: 6/11b lim: 5 exec/s: 0 rss: 73Mb L: 1/3 MS: 1 CopyPart- 00:07:38.604 [2024-11-15 12:29:18.903897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.604 [2024-11-15 12:29:18.903922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.604 [2024-11-15 12:29:18.903979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.604 [2024-11-15 12:29:18.903993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.604 #8 NEW cov: 12409 ft: 14518 corp: 7/13b lim: 5 exec/s: 0 rss: 73Mb L: 2/3 MS: 1 CopyPart- 00:07:38.862 [2024-11-15 12:29:18.964001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.862 [2024-11-15 12:29:18.964027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.862 [2024-11-15 12:29:18.964103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.862 [2024-11-15 12:29:18.964117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.862 #9 NEW cov: 12409 ft: 14649 corp: 8/15b lim: 5 exec/s: 0 rss: 73Mb L: 2/3 MS: 1 ShuffleBytes- 00:07:38.862 [2024-11-15 12:29:19.004103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.862 [2024-11-15 12:29:19.004128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.862 [2024-11-15 12:29:19.004187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.862 [2024-11-15 12:29:19.004201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.862 #10 NEW cov: 12409 ft: 14727 corp: 9/17b lim: 5 exec/s: 0 rss: 73Mb L: 2/3 MS: 1 CopyPart- 00:07:38.862 [2024-11-15 12:29:19.044082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.862 [2024-11-15 12:29:19.044107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.862 #11 NEW cov: 12409 ft: 14753 corp: 10/18b lim: 5 exec/s: 0 rss: 73Mb L: 1/3 MS: 1 CrossOver- 00:07:38.862 [2024-11-15 12:29:19.104550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.862 [2024-11-15 12:29:19.104575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.862 [2024-11-15 12:29:19.104648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.862 [2024-11-15 12:29:19.104663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.862 [2024-11-15 12:29:19.104722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.862 [2024-11-15 12:29:19.104736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.862 #12 NEW cov: 12409 ft: 14816 corp: 11/21b lim: 5 exec/s: 0 rss: 
73Mb L: 3/3 MS: 1 CrossOver- 00:07:38.862 [2024-11-15 12:29:19.164740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.862 [2024-11-15 12:29:19.164765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.863 [2024-11-15 12:29:19.164839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.863 [2024-11-15 12:29:19.164854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.863 [2024-11-15 12:29:19.164912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.863 [2024-11-15 12:29:19.164929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.863 #13 NEW cov: 12409 ft: 14842 corp: 12/24b lim: 5 exec/s: 0 rss: 73Mb L: 3/3 MS: 1 ChangeASCIIInt- 00:07:38.863 [2024-11-15 12:29:19.204892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.863 [2024-11-15 12:29:19.204918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.863 [2024-11-15 12:29:19.204976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.863 [2024-11-15 12:29:19.204990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.863 [2024-11-15 12:29:19.205046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.863 [2024-11-15 12:29:19.205060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.121 #14 NEW cov: 12409 ft: 14886 corp: 13/27b lim: 5 exec/s: 0 rss: 73Mb L: 3/3 MS: 1 InsertByte- 00:07:39.121 [2024-11-15 12:29:19.264725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.121 [2024-11-15 12:29:19.264752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.121 #15 NEW cov: 12409 ft: 14925 corp: 14/28b lim: 5 exec/s: 0 rss: 73Mb L: 1/3 MS: 1 EraseBytes- 00:07:39.121 [2024-11-15 12:29:19.304838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.121 [2024-11-15 12:29:19.304864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.121 #16 NEW cov: 12409 ft: 14968 corp: 15/29b lim: 5 exec/s: 0 rss: 73Mb L: 1/3 MS: 1 ChangeBit- 00:07:39.121 [2024-11-15 12:29:19.344942] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.121 [2024-11-15 12:29:19.344968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.121 #17 NEW cov: 12409 ft: 14999 corp: 16/30b lim: 5 exec/s: 0 rss: 73Mb L: 1/3 MS: 1 EraseBytes- 00:07:39.121 [2024-11-15 12:29:19.385423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.121 [2024-11-15 12:29:19.385449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.121 [2024-11-15 12:29:19.385525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.121 [2024-11-15 12:29:19.385540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.121 [2024-11-15 12:29:19.385596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.121 [2024-11-15 12:29:19.385609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.121 #18 NEW cov: 12409 ft: 15050 corp: 17/33b lim: 5 exec/s: 0 rss: 73Mb L: 3/3 MS: 1 ChangeBit- 00:07:39.121 [2024-11-15 12:29:19.445465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.121 [2024-11-15 12:29:19.445495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.121 [2024-11-15 12:29:19.445570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.121 [2024-11-15 12:29:19.445584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.379 #19 NEW cov: 12409 ft: 15088 corp: 18/35b lim: 5 exec/s: 0 rss: 73Mb L: 2/3 MS: 1 ChangeByte- 00:07:39.379 [2024-11-15 12:29:19.485341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.379 [2024-11-15 12:29:19.485368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.636 NEW_FUNC[1/1]: 0x1c350e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:39.636 #20 NEW cov: 12432 ft: 15142 corp: 19/36b lim: 5 exec/s: 20 rss: 74Mb L: 1/3 MS: 1 ChangeByte- 00:07:39.636 [2024-11-15 12:29:19.828482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.636 [2024-11-15 12:29:19.828528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.636 [2024-11-15 
12:29:19.828620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.636 [2024-11-15 12:29:19.828638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.637 [2024-11-15 12:29:19.828727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.637 [2024-11-15 12:29:19.828742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.637 #21 NEW cov: 12432 ft: 15221 corp: 20/39b lim: 5 exec/s: 21 rss: 74Mb L: 3/3 MS: 1 CopyPart- 00:07:39.637 [2024-11-15 12:29:19.878426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.637 [2024-11-15 12:29:19.878454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.637 [2024-11-15 12:29:19.878544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.637 [2024-11-15 12:29:19.878560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.637 #22 NEW cov: 12432 ft: 15304 corp: 21/41b lim: 5 exec/s: 22 rss: 74Mb L: 2/3 MS: 1 ChangeBinInt- 00:07:39.637 [2024-11-15 12:29:19.948776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.637 [2024-11-15 12:29:19.948803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.637 [2024-11-15 12:29:19.948904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.637 [2024-11-15 12:29:19.948922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.637 #23 NEW cov: 12432 ft: 15333 corp: 22/43b lim: 5 exec/s: 23 rss: 74Mb L: 2/3 MS: 1 ChangeASCIIInt- 00:07:39.895 [2024-11-15 12:29:19.999656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.895 [2024-11-15 12:29:19.999686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.895 [2024-11-15 12:29:19.999780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.895 [2024-11-15 12:29:19.999797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.895 [2024-11-15 12:29:19.999881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.895 
[2024-11-15 12:29:19.999896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.895 [2024-11-15 12:29:19.999985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.895 [2024-11-15 12:29:20.000000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.895 #24 NEW cov: 12432 ft: 15690 corp: 23/47b lim: 5 exec/s: 24 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:07:39.895 [2024-11-15 12:29:20.069751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.895 [2024-11-15 12:29:20.069779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.895 [2024-11-15 12:29:20.069873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.895 [2024-11-15 12:29:20.069890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.895 [2024-11-15 12:29:20.069979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.895 [2024-11-15 12:29:20.069996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.895 #25 NEW cov: 12432 ft: 15702 corp: 24/50b lim: 5 exec/s: 25 rss: 74Mb L: 3/4 MS: 1 InsertByte- 00:07:39.895 [2024-11-15 12:29:20.139871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.895 [2024-11-15 12:29:20.139899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.895 [2024-11-15 12:29:20.140028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.895 [2024-11-15 12:29:20.140044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.895 #26 NEW cov: 12432 ft: 15808 corp: 25/52b lim: 5 exec/s: 26 rss: 74Mb L: 2/4 MS: 1 InsertByte- 00:07:39.895 [2024-11-15 12:29:20.189690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.895 [2024-11-15 12:29:20.189717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.895 #27 NEW cov: 12432 ft: 15819 corp: 26/53b lim: 5 exec/s: 27 rss: 74Mb L: 1/4 MS: 1 EraseBytes- 00:07:40.154 [2024-11-15 12:29:20.260220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.154 [2024-11-15 12:29:20.260248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.154 [2024-11-15 12:29:20.260342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.154 [2024-11-15 12:29:20.260358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.154 #28 NEW cov: 12432 ft: 15836 corp: 27/55b lim: 5 exec/s: 28 rss: 74Mb L: 2/4 MS: 1 CopyPart- 00:07:40.154 [2024-11-15 12:29:20.310881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.154 [2024-11-15 12:29:20.310907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.154 [2024-11-15 12:29:20.310999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.154 [2024-11-15 12:29:20.311014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.154 [2024-11-15 12:29:20.311099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.154 [2024-11-15 12:29:20.311113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.154 #29 NEW cov: 12432 ft: 15877 corp: 28/58b lim: 5 exec/s: 29 rss: 74Mb L: 3/4 MS: 1 ShuffleBytes- 00:07:40.154 [2024-11-15 12:29:20.380408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.154 [2024-11-15 12:29:20.380435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.154 #30 NEW cov: 12432 ft: 15884 corp: 29/59b lim: 5 exec/s: 30 rss: 74Mb L: 1/4 MS: 1 EraseBytes- 00:07:40.154 [2024-11-15 12:29:20.431732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.154 [2024-11-15 12:29:20.431757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.154 [2024-11-15 12:29:20.431849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.154 [2024-11-15 12:29:20.431865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.154 [2024-11-15 12:29:20.431948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.154 [2024-11-15 12:29:20.431962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.154 [2024-11-15 12:29:20.432042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 
cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.154 [2024-11-15 12:29:20.432059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.154 #31 NEW cov: 12432 ft: 15892 corp: 30/63b lim: 5 exec/s: 31 rss: 75Mb L: 4/4 MS: 1 InsertByte- 00:07:40.154 [2024-11-15 12:29:20.480781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.154 [2024-11-15 12:29:20.480809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.413 #32 NEW cov: 12432 ft: 15937 corp: 31/64b lim: 5 exec/s: 32 rss: 75Mb L: 1/4 MS: 1 EraseBytes- 00:07:40.413 [2024-11-15 12:29:20.551441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.413 [2024-11-15 12:29:20.551468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.413 [2024-11-15 12:29:20.551554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.413 [2024-11-15 12:29:20.551570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.413 #33 NEW cov: 12432 ft: 15941 corp: 32/66b lim: 5 exec/s: 33 rss: 75Mb L: 2/4 MS: 1 CrossOver- 00:07:40.413 [2024-11-15 12:29:20.602441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.413 [2024-11-15 12:29:20.602469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.413 [2024-11-15 12:29:20.602564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.413 [2024-11-15 12:29:20.602580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.413 [2024-11-15 12:29:20.602668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.413 [2024-11-15 12:29:20.602683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.413 #34 NEW cov: 12432 ft: 15949 corp: 33/69b lim: 5 exec/s: 17 rss: 75Mb L: 3/4 MS: 1 CrossOver- 00:07:40.413 #34 DONE cov: 12432 ft: 15949 corp: 33/69b lim: 5 exec/s: 17 rss: 75Mb 00:07:40.413 Done 34 runs in 2 second(s) 00:07:40.413 12:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:07:40.413 12:29:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:40.413 12:29:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:40.413 12:29:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:07:40.413 12:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:07:40.671 
12:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:40.671 12:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:40.671 12:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:40.671 12:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:07:40.671 12:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:40.671 12:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:40.671 12:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:07:40.671 12:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:07:40.671 12:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:40.671 12:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:07:40.672 12:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:40.672 12:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:40.672 12:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:40.672 12:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:07:40.672 [2024-11-15 12:29:20.801407] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:07:40.672 [2024-11-15 12:29:20.801487] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid673491 ] 00:07:40.930 [2024-11-15 12:29:21.116680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.930 [2024-11-15 12:29:21.176679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.930 [2024-11-15 12:29:21.235897] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.930 [2024-11-15 12:29:21.252114] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:07:40.930 INFO: Running with entropic power schedule (0xFF, 100). 00:07:40.930 INFO: Seed: 1047135339 00:07:41.188 INFO: Loaded 1 modules (387659 inline 8-bit counters): 387659 [0x2c4084c, 0x2c9f297), 00:07:41.188 INFO: Loaded 1 PC tables (387659 PCs): 387659 [0x2c9f298,0x3289748), 00:07:41.188 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:41.188 INFO: A corpus is not provided, starting from an empty corpus 00:07:41.188 #2 INITED exec/s: 0 rss: 66Mb 00:07:41.188 WARNING: no interesting inputs were found so far. 
Is the code instrumented for coverage? 00:07:41.188 This may also happen if the target rejected all inputs we tried so far 00:07:41.188 [2024-11-15 12:29:21.310955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c50a0141 cdw11:8aec9d2d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.188 [2024-11-15 12:29:21.310987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.447 NEW_FUNC[1/715]: 0x448a88 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:07:41.447 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:41.447 #26 NEW cov: 12227 ft: 12220 corp: 2/11b lim: 40 exec/s: 0 rss: 73Mb L: 10/10 MS: 4 ChangeBit-InsertByte-CrossOver-CMP- DE: "\001A\212\354\235-\242j"- 00:07:41.447 [2024-11-15 12:29:21.652207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c50a0141 cdw11:01418aec SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.447 [2024-11-15 12:29:21.652272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.447 [2024-11-15 12:29:21.652372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:9d2da26a cdw11:8aec9d2d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.447 [2024-11-15 12:29:21.652400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.447 #27 NEW cov: 12341 ft: 12959 corp: 3/29b lim: 40 exec/s: 0 rss: 73Mb L: 18/18 MS: 1 PersAutoDict- DE: "\001A\212\354\235-\242j"- 00:07:41.447 [2024-11-15 12:29:21.721881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01418aec cdw11:9d2da26a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.447 [2024-11-15 12:29:21.721909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.447 #30 NEW cov: 12347 ft: 13380 corp: 4/39b lim: 40 exec/s: 0 rss: 73Mb L: 10/18 MS: 3 CrossOver-ShuffleBytes-PersAutoDict- DE: "\001A\212\354\235-\242j"- 00:07:41.447 [2024-11-15 12:29:21.762344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c50a0141 cdw11:8a8c8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.447 [2024-11-15 12:29:21.762369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.447 [2024-11-15 12:29:21.762447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:8c8c8c8c cdw11:8c8c8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.447 [2024-11-15 12:29:21.762462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.447 [2024-11-15 12:29:21.762521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:8c8c8c8c cdw11:8c8c8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.447 [2024-11-15 12:29:21.762535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 
m:0 dnr:0 00:07:41.447 [2024-11-15 12:29:21.762593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:8c8c8c8c cdw11:8cec9d2d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.447 [2024-11-15 12:29:21.762607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.447 #31 NEW cov: 12432 ft: 14186 corp: 5/73b lim: 40 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:07:41.705 [2024-11-15 12:29:21.802229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c50a0141 cdw11:01418aec SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.705 [2024-11-15 12:29:21.802254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.705 [2024-11-15 12:29:21.802319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:9d2da26a cdw11:8aec9dad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.705 [2024-11-15 12:29:21.802349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.705 #37 NEW cov: 12432 ft: 14397 corp: 6/91b lim: 40 exec/s: 0 rss: 73Mb L: 18/34 MS: 1 ChangeBit- 00:07:41.705 [2024-11-15 12:29:21.862654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c50a0141 cdw11:8a8c8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.705 [2024-11-15 12:29:21.862681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.705 [2024-11-15 12:29:21.862744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:8c8c8c8c cdw11:8c8c8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.705 [2024-11-15 12:29:21.862759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.705 [2024-11-15 12:29:21.862821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:8c8c8c8c cdw11:8cdc8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.705 [2024-11-15 12:29:21.862834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.705 [2024-11-15 12:29:21.862895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:8c8c8c8c cdw11:8c8cec9d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.705 [2024-11-15 12:29:21.862909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.705 #38 NEW cov: 12432 ft: 14498 corp: 7/126b lim: 40 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 InsertByte- 00:07:41.705 [2024-11-15 12:29:21.922461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c500418a cdw11:ecf0f81f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.705 [2024-11-15 12:29:21.922487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.705 #39 NEW cov: 12432 ft: 14570 corp: 8/136b lim: 40 exec/s: 0 rss: 73Mb L: 10/35 MS: 1 CMP- DE: "\000A\212\354\360\370\037T"- 00:07:41.705 [2024-11-15 12:29:21.962538] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.705 [2024-11-15 12:29:21.962567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.705 #40 NEW cov: 12432 ft: 14609 corp: 9/148b lim: 40 exec/s: 0 rss: 73Mb L: 12/35 MS: 1 InsertRepeatedBytes- 00:07:41.705 [2024-11-15 12:29:22.002671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:014182ec cdw11:9d2da26a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.705 [2024-11-15 12:29:22.002696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.705 #46 NEW cov: 12432 ft: 14655 corp: 10/158b lim: 40 exec/s: 0 rss: 74Mb L: 10/35 MS: 1 ChangeBinInt- 00:07:41.963 [2024-11-15 12:29:22.063294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c50a0141 cdw11:8a8c8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.963 [2024-11-15 12:29:22.063324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.963 [2024-11-15 12:29:22.063385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:8c8c8c8c cdw11:8c8c8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.963 [2024-11-15 12:29:22.063399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.964 [2024-11-15 12:29:22.063459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:8c000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.964 [2024-11-15 12:29:22.063472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.964 [2024-11-15 12:29:22.063531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:008c8c8c cdw11:8cec9d2d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.964 [2024-11-15 12:29:22.063546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.964 #47 NEW cov: 12432 ft: 14757 corp: 11/192b lim: 40 exec/s: 0 rss: 74Mb L: 34/35 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:07:41.964 [2024-11-15 12:29:22.103086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffff3dff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.964 [2024-11-15 12:29:22.103115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.964 #48 NEW cov: 12432 ft: 14826 corp: 12/204b lim: 40 exec/s: 0 rss: 74Mb L: 12/35 MS: 1 ChangeByte- 00:07:41.964 [2024-11-15 12:29:22.173671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c50a0141 cdw11:8a8c8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.964 [2024-11-15 12:29:22.173700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.964 [2024-11-15 12:29:22.173768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY 
RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:8c8c8c8c cdw11:8c8c8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.964 [2024-11-15 12:29:22.173783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.964 [2024-11-15 12:29:22.173843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:8c000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.964 [2024-11-15 12:29:22.173858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.964 [2024-11-15 12:29:22.173921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:008c8c8c cdw11:8c8cec9d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.964 [2024-11-15 12:29:22.173936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.964 NEW_FUNC[1/1]: 0x1c350e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:41.964 #49 NEW cov: 12455 ft: 14892 corp: 13/239b lim: 40 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:07:41.964 [2024-11-15 12:29:22.233371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c500418a cdw11:ecf0f81f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.964 [2024-11-15 12:29:22.233397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.964 #50 NEW cov: 12455 ft: 15029 corp: 14/250b lim: 40 exec/s: 0 rss: 74Mb L: 11/35 MS: 1 InsertByte- 00:07:41.964 [2024-11-15 12:29:22.293627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c50a0141 cdw11:01418aec SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.964 [2024-11-15 12:29:22.293653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.964 [2024-11-15 12:29:22.293734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:9d2da26a cdw11:01410141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.964 [2024-11-15 12:29:22.293754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.222 #51 NEW cov: 12455 ft: 15044 corp: 15/268b lim: 40 exec/s: 51 rss: 74Mb L: 18/35 MS: 1 CrossOver- 00:07:42.222 [2024-11-15 12:29:22.333998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c50a0141 cdw11:8a8c8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.222 [2024-11-15 12:29:22.334024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.222 [2024-11-15 12:29:22.334100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:8c8c8c8c cdw11:8c8c8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.222 [2024-11-15 12:29:22.334115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.222 [2024-11-15 12:29:22.334176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:8c8c8c8c cdw11:8c2ddc8c SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:42.222 [2024-11-15 12:29:22.334190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.222 [2024-11-15 12:29:22.334250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:8c8c8c8c cdw11:8c8c8cec SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.222 [2024-11-15 12:29:22.334264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.222 #52 NEW cov: 12455 ft: 15079 corp: 16/304b lim: 40 exec/s: 52 rss: 74Mb L: 36/36 MS: 1 InsertByte- 00:07:42.222 [2024-11-15 12:29:22.393789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01418a16 cdw11:9d2da26a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.222 [2024-11-15 12:29:22.393814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.222 #53 NEW cov: 12455 ft: 15161 corp: 17/314b lim: 40 exec/s: 53 rss: 74Mb L: 10/36 MS: 1 ChangeBinInt- 00:07:42.222 [2024-11-15 12:29:22.434294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c50a0141 cdw11:8a8c8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.222 [2024-11-15 12:29:22.434323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.222 [2024-11-15 12:29:22.434412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:8c8c8c8c cdw11:8c8c8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.222 [2024-11-15 12:29:22.434429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.222 [2024-11-15 12:29:22.434506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:8c8c8c8c cdw11:8c2ddc8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.222 [2024-11-15 12:29:22.434520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.223 [2024-11-15 12:29:22.434578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:8c8c8c8c cdw11:8c8c8cec SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.223 [2024-11-15 12:29:22.434591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.223 #54 NEW cov: 12455 ft: 15170 corp: 18/350b lim: 40 exec/s: 54 rss: 74Mb L: 36/36 MS: 1 ShuffleBytes- 00:07:42.223 [2024-11-15 12:29:22.494513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.223 [2024-11-15 12:29:22.494538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.223 [2024-11-15 12:29:22.494600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.223 [2024-11-15 12:29:22.494614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.223 [2024-11-15 
12:29:22.494674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.223 [2024-11-15 12:29:22.494688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.223 [2024-11-15 12:29:22.494750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.223 [2024-11-15 12:29:22.494764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.223 #58 NEW cov: 12455 ft: 15244 corp: 19/385b lim: 40 exec/s: 58 rss: 74Mb L: 35/36 MS: 4 InsertByte-ChangeByte-ChangeBinInt-InsertRepeatedBytes- 00:07:42.223 [2024-11-15 12:29:22.534604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c50a0141 cdw11:8a8c8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.223 [2024-11-15 12:29:22.534628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.223 [2024-11-15 12:29:22.534693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:8c8c8c8c cdw11:8c8c8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.223 [2024-11-15 12:29:22.534707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.223 [2024-11-15 12:29:22.534770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:2f8c0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.223 [2024-11-15 12:29:22.534784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.223 [2024-11-15 12:29:22.534845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00008c8c cdw11:8c8c8cec SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.223 [2024-11-15 12:29:22.534859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.481 #59 NEW cov: 12455 ft: 15257 corp: 20/421b lim: 40 exec/s: 59 rss: 74Mb L: 36/36 MS: 1 InsertByte- 00:07:42.481 [2024-11-15 12:29:22.594756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c50a0141 cdw11:8a8c8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.481 [2024-11-15 12:29:22.594788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.481 [2024-11-15 12:29:22.594869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:8c8c8c8c cdw11:8c8c8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.481 [2024-11-15 12:29:22.594885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.481 [2024-11-15 12:29:22.594946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:8c8c8c8c cdw11:8c2ddc8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.481 [2024-11-15 12:29:22.594960] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.481 [2024-11-15 12:29:22.595019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:8c8c8d8c cdw11:8c8c8cec SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.481 [2024-11-15 12:29:22.595033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.481 #60 NEW cov: 12455 ft: 15338 corp: 21/457b lim: 40 exec/s: 60 rss: 74Mb L: 36/36 MS: 1 ChangeBinInt- 00:07:42.481 [2024-11-15 12:29:22.654906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.481 [2024-11-15 12:29:22.654930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.481 [2024-11-15 12:29:22.655005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.481 [2024-11-15 12:29:22.655019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.481 [2024-11-15 12:29:22.655080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.481 [2024-11-15 12:29:22.655093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.481 [2024-11-15 12:29:22.655152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.481 [2024-11-15 12:29:22.655166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.481 #61 NEW cov: 12455 ft: 15451 corp: 22/492b lim: 40 exec/s: 61 rss: 74Mb L: 35/36 MS: 1 ShuffleBytes- 00:07:42.481 [2024-11-15 12:29:22.714789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.481 [2024-11-15 12:29:22.714815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.481 [2024-11-15 12:29:22.714877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.481 [2024-11-15 12:29:22.714891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.481 #63 NEW cov: 12455 ft: 15485 corp: 23/512b lim: 40 exec/s: 63 rss: 74Mb L: 20/36 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:42.481 [2024-11-15 12:29:22.755197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.481 [2024-11-15 12:29:22.755221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.481 [2024-11-15 12:29:22.755287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY 
RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.481 [2024-11-15 12:29:22.755301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.481 [2024-11-15 12:29:22.755362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.481 [2024-11-15 12:29:22.755376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.481 [2024-11-15 12:29:22.755435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.481 [2024-11-15 12:29:22.755449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.481 #64 NEW cov: 12455 ft: 15489 corp: 24/547b lim: 40 exec/s: 64 rss: 74Mb L: 35/36 MS: 1 ChangeBit- 00:07:42.481 [2024-11-15 12:29:22.794923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:014182ec cdw11:9d3a2da2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.481 [2024-11-15 12:29:22.794948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.740 #65 NEW cov: 12455 ft: 15533 corp: 25/558b lim: 40 exec/s: 65 rss: 75Mb L: 11/36 MS: 1 InsertByte- 00:07:42.740 [2024-11-15 12:29:22.855680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c50a0141 cdw11:8a8c8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.740 [2024-11-15 12:29:22.855706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.740 [2024-11-15 12:29:22.855768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:8c8c8c8c cdw11:8c8c8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.740 [2024-11-15 12:29:22.855783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.740 [2024-11-15 12:29:22.855847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:8c000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.740 [2024-11-15 12:29:22.855860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.740 [2024-11-15 12:29:22.855922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:008c8c8c cdw11:8c8cec9d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.740 [2024-11-15 12:29:22.855935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.740 [2024-11-15 12:29:22.855994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:2da20000 cdw11:0000006a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.740 [2024-11-15 12:29:22.856008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:42.740 #66 NEW cov: 12455 ft: 15590 corp: 26/598b 
lim: 40 exec/s: 66 rss: 75Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:42.740 [2024-11-15 12:29:22.895341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00010000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.740 [2024-11-15 12:29:22.895366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.740 [2024-11-15 12:29:22.895431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.740 [2024-11-15 12:29:22.895445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.740 #67 NEW cov: 12455 ft: 15613 corp: 27/618b lim: 40 exec/s: 67 rss: 75Mb L: 20/40 MS: 1 ChangeBinInt- 00:07:42.740 [2024-11-15 12:29:22.955515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.740 [2024-11-15 12:29:22.955541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.740 [2024-11-15 12:29:22.955603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.740 [2024-11-15 12:29:22.955616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.740 #68 NEW cov: 12455 ft: 15625 corp: 28/638b lim: 40 exec/s: 68 rss: 75Mb L: 20/40 MS: 1 EraseBytes- 00:07:42.740 [2024-11-15 12:29:23.015648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c50a0141 cdw11:01410041 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.740 [2024-11-15 12:29:23.015673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.740 [2024-11-15 12:29:23.015751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:8aecf0f8 cdw11:1f549d2d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.740 [2024-11-15 12:29:23.015765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.740 #69 NEW cov: 12455 ft: 15669 corp: 29/656b lim: 40 exec/s: 69 rss: 75Mb L: 18/40 MS: 1 PersAutoDict- DE: "\000A\212\354\360\370\037T"- 00:07:42.740 [2024-11-15 12:29:23.055627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01be8a16 cdw11:9d2da26a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.740 [2024-11-15 12:29:23.055654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.999 #70 NEW cov: 12455 ft: 15695 corp: 30/666b lim: 40 exec/s: 70 rss: 75Mb L: 10/40 MS: 1 ChangeBinInt- 00:07:43.000 [2024-11-15 12:29:23.116211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c50a0141 cdw11:8a8c8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.000 [2024-11-15 12:29:23.116237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:07:43.000 [2024-11-15 12:29:23.116300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:8c8c8c8c cdw11:8c8c8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.000 [2024-11-15 12:29:23.116318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.000 [2024-11-15 12:29:23.116380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:8c8cad8c cdw11:8c8c2ddc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.000 [2024-11-15 12:29:23.116394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.000 [2024-11-15 12:29:23.116455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:8c8c8c8c cdw11:8c8c8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.000 [2024-11-15 12:29:23.116468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.000 #71 NEW cov: 12455 ft: 15705 corp: 31/703b lim: 40 exec/s: 71 rss: 75Mb L: 37/40 MS: 1 InsertByte- 00:07:43.000 [2024-11-15 12:29:23.155933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:004182ec cdw11:9d3a2da2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.000 [2024-11-15 12:29:23.155958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.000 #72 NEW cov: 12455 ft: 15715 corp: 32/714b lim: 40 exec/s: 72 rss: 75Mb L: 11/40 MS: 1 ChangeBit- 00:07:43.000 [2024-11-15 12:29:23.216217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.000 [2024-11-15 12:29:23.216243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.000 [2024-11-15 12:29:23.216305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.000 [2024-11-15 12:29:23.216340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.000 #73 NEW cov: 12455 ft: 15728 corp: 33/734b lim: 40 exec/s: 73 rss: 75Mb L: 20/40 MS: 1 CopyPart- 00:07:43.000 [2024-11-15 12:29:23.256609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c50a0141 cdw11:8a8c8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.000 [2024-11-15 12:29:23.256634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.000 [2024-11-15 12:29:23.256694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:8c8c8c8c cdw11:8c8c8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.000 [2024-11-15 12:29:23.256708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.000 [2024-11-15 12:29:23.256766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:2f8c0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:07:43.000 [2024-11-15 12:29:23.256779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.000 [2024-11-15 12:29:23.256838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00008c8c cdw11:8c8c8cec SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.000 [2024-11-15 12:29:23.256851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.000 #74 NEW cov: 12455 ft: 15733 corp: 34/770b lim: 40 exec/s: 37 rss: 75Mb L: 36/40 MS: 1 ShuffleBytes- 00:07:43.000 #74 DONE cov: 12455 ft: 15733 corp: 34/770b lim: 40 exec/s: 37 rss: 75Mb 00:07:43.000 ###### Recommended dictionary. ###### 00:07:43.000 "\001A\212\354\235-\242j" # Uses: 2 00:07:43.000 "\000A\212\354\360\370\037T" # Uses: 1 00:07:43.000 "\000\000\000\000\000\000\000\000" # Uses: 0 00:07:43.000 ###### End of recommended dictionary. ###### 00:07:43.000 Done 74 runs in 2 second(s) 00:07:43.259 12:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:07:43.259 12:29:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:43.259 12:29:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:43.259 12:29:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:07:43.259 12:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:07:43.259 12:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:43.259 12:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:43.259 12:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:43.259 12:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:07:43.259 12:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:43.259 12:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:43.259 12:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:07:43.259 12:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:07:43.259 12:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:43.259 12:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:07:43.259 12:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:43.259 12:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:43.259 12:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:43.259 12:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:07:43.259 [2024-11-15 12:29:23.459053] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:07:43.259 [2024-11-15 12:29:23.459139] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid673866 ] 00:07:43.518 [2024-11-15 12:29:23.764251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.518 [2024-11-15 12:29:23.821303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.776 [2024-11-15 12:29:23.880782] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.776 [2024-11-15 12:29:23.897020] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:07:43.776 INFO: Running with entropic power schedule (0xFF, 100). 00:07:43.776 INFO: Seed: 3692142262 00:07:43.776 INFO: Loaded 1 modules (387659 inline 8-bit counters): 387659 [0x2c4084c, 0x2c9f297), 00:07:43.776 INFO: Loaded 1 PC tables (387659 PCs): 387659 [0x2c9f298,0x3289748), 00:07:43.776 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:43.776 INFO: A corpus is not provided, starting from an empty corpus 00:07:43.776 #2 INITED exec/s: 0 rss: 66Mb 00:07:43.776 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:43.776 This may also happen if the target rejected all inputs we tried so far 00:07:43.776 [2024-11-15 12:29:23.945985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.776 [2024-11-15 12:29:23.946018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.035 NEW_FUNC[1/716]: 0x44a7f8 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:07:44.035 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:44.035 #24 NEW cov: 12236 ft: 12235 corp: 2/16b lim: 40 exec/s: 0 rss: 73Mb L: 15/15 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:44.035 [2024-11-15 12:29:24.276859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.035 [2024-11-15 12:29:24.276899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.035 #31 NEW cov: 12353 ft: 12884 corp: 3/26b lim: 40 exec/s: 0 rss: 73Mb L: 10/15 MS: 2 ShuffleBytes-CrossOver- 00:07:44.035 [2024-11-15 12:29:24.316872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2c0a5757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.035 [2024-11-15 12:29:24.316898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.035 #32 NEW cov: 12359 ft: 13057 corp: 4/37b lim: 40 exec/s: 0 rss: 74Mb L: 11/15 MS: 1 InsertByte- 00:07:44.035 [2024-11-15 12:29:24.377098] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2c0a5757 cdw11:5757570b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.035 [2024-11-15 12:29:24.377128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.293 #33 NEW cov: 12444 ft: 13423 corp: 5/48b lim: 40 exec/s: 0 rss: 74Mb L: 11/15 MS: 1 ChangeBinInt- 00:07:44.293 [2024-11-15 12:29:24.437159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:290a5757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.293 [2024-11-15 12:29:24.437185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.293 #34 NEW cov: 12444 ft: 13598 corp: 6/59b lim: 40 exec/s: 0 rss: 74Mb L: 11/15 MS: 1 ChangeBinInt- 00:07:44.293 [2024-11-15 12:29:24.477816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:290a5757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.293 [2024-11-15 12:29:24.477841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.293 [2024-11-15 12:29:24.477917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.293 [2024-11-15 12:29:24.477931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.294 [2024-11-15 12:29:24.477993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.294 [2024-11-15 12:29:24.478006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.294 [2024-11-15 12:29:24.478064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.294 [2024-11-15 12:29:24.478077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.294 #40 NEW cov: 12444 ft: 14538 corp: 7/94b lim: 40 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:44.294 [2024-11-15 12:29:24.537492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2c0a2357 cdw11:5757570b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.294 [2024-11-15 12:29:24.537520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.294 #41 NEW cov: 12444 ft: 14709 corp: 8/105b lim: 40 exec/s: 0 rss: 74Mb L: 11/35 MS: 1 ChangeByte- 00:07:44.294 [2024-11-15 12:29:24.597617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.294 [2024-11-15 12:29:24.597643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.294 #42 NEW cov: 12444 ft: 14760 corp: 9/115b lim: 40 exec/s: 0 rss: 74Mb L: 10/35 MS: 1 CopyPart- 00:07:44.552 [2024-11-15 12:29:24.637751] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.552 [2024-11-15 12:29:24.637777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.552 #43 NEW cov: 12444 ft: 14805 corp: 10/130b lim: 40 exec/s: 0 rss: 74Mb L: 15/35 MS: 1 ChangeBinInt- 00:07:44.552 [2024-11-15 12:29:24.697951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.552 [2024-11-15 12:29:24.697977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.552 #44 NEW cov: 12444 ft: 14828 corp: 11/143b lim: 40 exec/s: 0 rss: 74Mb L: 13/35 MS: 1 CrossOver- 00:07:44.552 [2024-11-15 12:29:24.738581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:290a5757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.552 [2024-11-15 12:29:24.738606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.552 [2024-11-15 12:29:24.738685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.552 [2024-11-15 12:29:24.738700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.552 [2024-11-15 12:29:24.738759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.552 [2024-11-15 12:29:24.738773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.552 [2024-11-15 12:29:24.738832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:5dffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.552 [2024-11-15 12:29:24.738845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.552 #45 NEW cov: 12444 ft: 14843 corp: 12/179b lim: 40 exec/s: 0 rss: 74Mb L: 36/36 MS: 1 InsertByte- 00:07:44.552 [2024-11-15 12:29:24.798207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:300a5757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.552 [2024-11-15 12:29:24.798232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.552 NEW_FUNC[1/1]: 0x1c350e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:44.552 #46 NEW cov: 12467 ft: 14879 corp: 13/193b lim: 40 exec/s: 0 rss: 74Mb L: 14/36 MS: 1 InsertByte- 00:07:44.552 [2024-11-15 12:29:24.858408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2c0a5757 cdw11:5717570b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.552 [2024-11-15 12:29:24.858434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.552 #47 NEW cov: 12467 ft: 14909 corp: 14/204b lim: 40 
exec/s: 0 rss: 74Mb L: 11/36 MS: 1 ChangeBit- 00:07:44.810 [2024-11-15 12:29:24.899060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:290a5757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.810 [2024-11-15 12:29:24.899086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.810 [2024-11-15 12:29:24.899153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.810 [2024-11-15 12:29:24.899166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.810 [2024-11-15 12:29:24.899225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.811 [2024-11-15 12:29:24.899239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.811 [2024-11-15 12:29:24.899300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffff3fff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.811 [2024-11-15 12:29:24.899318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.811 #48 NEW cov: 12467 ft: 14982 corp: 15/239b lim: 40 exec/s: 0 rss: 74Mb L: 35/36 MS: 1 ChangeByte- 00:07:44.811 [2024-11-15 12:29:24.938608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.811 [2024-11-15 12:29:24.938637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.811 #49 NEW cov: 12467 ft: 14991 corp: 16/250b lim: 40 exec/s: 49 rss: 74Mb L: 11/36 MS: 1 CrossOver- 00:07:44.811 [2024-11-15 12:29:24.978748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2c0a570b cdw11:5717570b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.811 [2024-11-15 12:29:24.978773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.811 #50 NEW cov: 12467 ft: 15028 corp: 17/261b lim: 40 exec/s: 50 rss: 74Mb L: 11/36 MS: 1 ChangeBinInt- 00:07:44.811 [2024-11-15 12:29:25.038929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2c0a5757 cdw11:57575700 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.811 [2024-11-15 12:29:25.038954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.811 #51 NEW cov: 12467 ft: 15066 corp: 18/272b lim: 40 exec/s: 51 rss: 74Mb L: 11/36 MS: 1 ShuffleBytes- 00:07:44.811 [2024-11-15 12:29:25.079019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:300a5757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.811 [2024-11-15 12:29:25.079045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.811 #52 NEW cov: 12467 ft: 15091 corp: 19/287b lim: 40 exec/s: 52 rss: 74Mb L: 
15/36 MS: 1 CrossOver- 00:07:44.811 [2024-11-15 12:29:25.139689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:290a5757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.811 [2024-11-15 12:29:25.139715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.811 [2024-11-15 12:29:25.139793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.811 [2024-11-15 12:29:25.139807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.811 [2024-11-15 12:29:25.139869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.811 [2024-11-15 12:29:25.139882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.811 [2024-11-15 12:29:25.139943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ff473fff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.811 [2024-11-15 12:29:25.139957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.070 #53 NEW cov: 12467 ft: 15129 corp: 20/322b lim: 40 exec/s: 53 rss: 74Mb L: 35/36 MS: 1 ChangeByte- 00:07:45.070 [2024-11-15 12:29:25.199877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:290a5757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.070 [2024-11-15 12:29:25.199903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.070 [2024-11-15 12:29:25.199968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.070 [2024-11-15 12:29:25.199982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.070 [2024-11-15 12:29:25.200041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.070 [2024-11-15 12:29:25.200055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.070 [2024-11-15 12:29:25.200120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.070 [2024-11-15 12:29:25.200133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.070 #54 NEW cov: 12467 ft: 15169 corp: 21/358b lim: 40 exec/s: 54 rss: 74Mb L: 36/36 MS: 1 CopyPart- 00:07:45.070 [2024-11-15 12:29:25.239498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a235757 cdw11:5757570b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.070 [2024-11-15 12:29:25.239523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.070 #55 NEW cov: 12467 ft: 15205 corp: 22/369b lim: 40 exec/s: 55 rss: 74Mb L: 11/36 MS: 1 CopyPart- 00:07:45.070 [2024-11-15 12:29:25.299662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2c0a572c cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.070 [2024-11-15 12:29:25.299688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.070 #56 NEW cov: 12467 ft: 15223 corp: 23/381b lim: 40 exec/s: 56 rss: 74Mb L: 12/36 MS: 1 CopyPart- 00:07:45.070 [2024-11-15 12:29:25.339760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2c0a57d2 cdw11:0b571757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.070 [2024-11-15 12:29:25.339785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.070 #57 NEW cov: 12467 ft: 15293 corp: 24/393b lim: 40 exec/s: 57 rss: 74Mb L: 12/36 MS: 1 InsertByte- 00:07:45.070 [2024-11-15 12:29:25.400461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:290a5757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.070 [2024-11-15 12:29:25.400487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.070 [2024-11-15 12:29:25.400549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:58ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.070 [2024-11-15 12:29:25.400563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.070 [2024-11-15 12:29:25.400624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.070 [2024-11-15 12:29:25.400638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.070 [2024-11-15 12:29:25.400698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.070 [2024-11-15 12:29:25.400711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.329 #58 NEW cov: 12467 ft: 15318 corp: 25/429b lim: 40 exec/s: 58 rss: 74Mb L: 36/36 MS: 1 ChangeByte- 00:07:45.329 [2024-11-15 12:29:25.460068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:300a5757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.329 [2024-11-15 12:29:25.460093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.329 #59 NEW cov: 12467 ft: 15353 corp: 26/440b lim: 40 exec/s: 59 rss: 75Mb L: 11/36 MS: 1 EraseBytes- 00:07:45.329 [2024-11-15 12:29:25.520256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:300a5757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.329 [2024-11-15 12:29:25.520281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:45.329 #60 NEW cov: 12467 ft: 15361 corp: 27/454b lim: 40 exec/s: 60 rss: 75Mb L: 14/36 MS: 1 ChangeByte- 00:07:45.329 [2024-11-15 12:29:25.560392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:13575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.329 [2024-11-15 12:29:25.560417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.329 #61 NEW cov: 12467 ft: 15373 corp: 28/467b lim: 40 exec/s: 61 rss: 75Mb L: 13/36 MS: 1 ChangeBinInt- 00:07:45.329 [2024-11-15 12:29:25.600481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:300a5757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.329 [2024-11-15 12:29:25.600508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.329 #62 NEW cov: 12467 ft: 15384 corp: 29/482b lim: 40 exec/s: 62 rss: 75Mb L: 15/36 MS: 1 CopyPart- 00:07:45.329 [2024-11-15 12:29:25.641093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:290a5757 cdw11:57535757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.329 [2024-11-15 12:29:25.641118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.329 [2024-11-15 12:29:25.641179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.329 [2024-11-15 12:29:25.641193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.329 [2024-11-15 12:29:25.641253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.329 [2024-11-15 12:29:25.641266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.329 [2024-11-15 12:29:25.641335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.329 [2024-11-15 12:29:25.641355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.329 #63 NEW cov: 12467 ft: 15407 corp: 30/518b lim: 40 exec/s: 63 rss: 75Mb L: 36/36 MS: 1 ChangeBit- 00:07:45.588 [2024-11-15 12:29:25.680733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.588 [2024-11-15 12:29:25.680758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.588 #64 NEW cov: 12467 ft: 15420 corp: 31/528b lim: 40 exec/s: 64 rss: 75Mb L: 10/36 MS: 1 CopyPart- 00:07:45.588 [2024-11-15 12:29:25.740904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a2c2357 cdw11:5757570b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.588 [2024-11-15 12:29:25.740928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.588 #65 NEW cov: 
12467 ft: 15430 corp: 32/539b lim: 40 exec/s: 65 rss: 75Mb L: 11/36 MS: 1 ShuffleBytes- 00:07:45.588 [2024-11-15 12:29:25.780976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:300a5757 cdw11:47575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.588 [2024-11-15 12:29:25.781001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.588 #66 NEW cov: 12467 ft: 15434 corp: 33/550b lim: 40 exec/s: 66 rss: 75Mb L: 11/36 MS: 1 ChangeBit- 00:07:45.588 [2024-11-15 12:29:25.841665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:290a5700 cdw11:00000057 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.588 [2024-11-15 12:29:25.841692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.588 [2024-11-15 12:29:25.841774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.588 [2024-11-15 12:29:25.841788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.588 [2024-11-15 12:29:25.841850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.588 [2024-11-15 12:29:25.841864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.588 [2024-11-15 12:29:25.841924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff3fff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.588 [2024-11-15 12:29:25.841937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.588 #67 NEW cov: 12467 ft: 15451 corp: 34/589b lim: 40 exec/s: 67 rss: 75Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:07:45.588 [2024-11-15 12:29:25.881296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2c0a5757 cdw11:5717570b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.588 [2024-11-15 12:29:25.881330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.588 #68 NEW cov: 12467 ft: 15458 corp: 35/601b lim: 40 exec/s: 68 rss: 75Mb L: 12/39 MS: 1 InsertByte- 00:07:45.588 [2024-11-15 12:29:25.921384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2c0a570b cdw11:57e3570b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.588 [2024-11-15 12:29:25.921409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.846 #69 NEW cov: 12467 ft: 15467 corp: 36/612b lim: 40 exec/s: 34 rss: 75Mb L: 11/39 MS: 1 ChangeBinInt- 00:07:45.846 #69 DONE cov: 12467 ft: 15467 corp: 36/612b lim: 40 exec/s: 34 rss: 75Mb 00:07:45.846 Done 69 runs in 2 second(s) 00:07:45.846 12:29:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:07:45.846 12:29:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:45.846 12:29:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i 
< fuzz_num )) 00:07:45.846 12:29:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:07:45.846 12:29:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:07:45.846 12:29:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:45.846 12:29:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:45.846 12:29:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:45.846 12:29:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:07:45.846 12:29:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:45.846 12:29:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:45.846 12:29:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:07:45.846 12:29:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:07:45.846 12:29:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:45.846 12:29:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:07:45.846 12:29:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:45.846 12:29:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:45.846 12:29:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:45.846 12:29:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:07:45.846 [2024-11-15 12:29:26.094404] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:07:45.846 [2024-11-15 12:29:26.094474] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid674203 ] 00:07:46.105 [2024-11-15 12:29:26.312280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.105 [2024-11-15 12:29:26.350545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.105 [2024-11-15 12:29:26.409825] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.105 [2024-11-15 12:29:26.426054] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:07:46.105 INFO: Running with entropic power schedule (0xFF, 100). 
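The run.sh trace above captures the whole per-target setup for stage 12: derive a service id from the stage number, rewrite the trsvcid in the NVMe-oF JSON config, append two LeakSanitizer suppressions, create the corpus directory, and launch llvm_nvme_fuzz against the freshly started TCP listener. Below is a minimal sketch of that sequence for reproducing a single stage locally; the FUZZ_NUM/PORT parameterization, the output redirections into the per-run config and suppression files, and placing LSAN_OPTIONS on the command line are assumptions (the trace shows only the bare commands), while the flags and paths are taken verbatim from the trace.

    # Sketch, not the exact run.sh code: FUZZ_NUM, PORT and the redirections are assumed.
    FUZZ_NUM=12
    PORT=$((4400 + FUZZ_NUM))                     # 4412 for stage 12, matching the trace
    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    CORPUS=$SPDK/../corpus/llvm_nvmf_$FUZZ_NUM
    CONF=/tmp/fuzz_json_$FUZZ_NUM.conf
    SUPP=/var/tmp/suppress_nvmf_fuzz

    mkdir -p "$CORPUS"
    # Point this stage's subsystem at its own TCP service id.
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$PORT\"/" \
        "$SPDK/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$CONF"
    # Allocations that intentionally outlive the short run are suppressed for LSan.
    echo leak:spdk_nvmf_qpair_disconnect >> "$SUPP"
    echo leak:nvmf_ctrlr_create >> "$SUPP"

    LSAN_OPTIONS=report_objects=1:suppressions=$SUPP:print_suppressions=0 \
    "$SPDK/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
        -P "$SPDK/../output/llvm/" \
        -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$PORT" \
        -c "$CONF" -t 1 -D "$CORPUS" -Z "$FUZZ_NUM"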
00:07:46.105 INFO: Seed: 1928185547 00:07:46.363 INFO: Loaded 1 modules (387659 inline 8-bit counters): 387659 [0x2c4084c, 0x2c9f297), 00:07:46.363 INFO: Loaded 1 PC tables (387659 PCs): 387659 [0x2c9f298,0x3289748), 00:07:46.363 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:46.363 INFO: A corpus is not provided, starting from an empty corpus 00:07:46.363 #2 INITED exec/s: 0 rss: 66Mb 00:07:46.363 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:46.363 This may also happen if the target rejected all inputs we tried so far 00:07:46.363 [2024-11-15 12:29:26.480985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c0c0c00a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.363 [2024-11-15 12:29:26.481023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.622 NEW_FUNC[1/716]: 0x44c568 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:07:46.622 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:46.622 #3 NEW cov: 12238 ft: 12225 corp: 2/9b lim: 40 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:07:46.622 [2024-11-15 12:29:26.831826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ac0c0c0 cdw11:c0c0c0c0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.622 [2024-11-15 12:29:26.831871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.622 #4 NEW cov: 12351 ft: 12757 corp: 3/18b lim: 40 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 CrossOver- 00:07:46.622 [2024-11-15 12:29:26.891875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ac0c0c0 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.622 [2024-11-15 12:29:26.891910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.622 #5 NEW cov: 12357 ft: 13088 corp: 4/31b lim: 40 exec/s: 0 rss: 73Mb L: 13/13 MS: 1 CMP- DE: "\010\000\000\000"- 00:07:46.882 [2024-11-15 12:29:26.982156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ac0c0c0 cdw11:c0c0c00a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.882 [2024-11-15 12:29:26.982188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.882 [2024-11-15 12:29:26.982237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c0c0c0c0 cdw11:c0c0c00a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.882 [2024-11-15 12:29:26.982257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.882 #11 NEW cov: 12442 ft: 14006 corp: 5/49b lim: 40 exec/s: 0 rss: 73Mb L: 18/18 MS: 1 CopyPart- 00:07:46.882 [2024-11-15 12:29:27.042305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ac0c0c0 cdw11:c00ac0c0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.882 [2024-11-15 12:29:27.042346] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.882 [2024-11-15 12:29:27.042397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0ac0c0c0 cdw11:c0c0c0c0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.882 [2024-11-15 12:29:27.042413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.882 #12 NEW cov: 12442 ft: 14210 corp: 6/68b lim: 40 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 CrossOver- 00:07:46.882 [2024-11-15 12:29:27.132555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ac0c0c0 cdw11:c00ac0c0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.882 [2024-11-15 12:29:27.132589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.882 [2024-11-15 12:29:27.132639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0ac0c0c0 cdw11:c0c0c0c0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.882 [2024-11-15 12:29:27.132656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.882 #13 NEW cov: 12442 ft: 14302 corp: 7/87b lim: 40 exec/s: 0 rss: 74Mb L: 19/19 MS: 1 CrossOver- 00:07:46.882 [2024-11-15 12:29:27.222779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c0c0c00a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.882 [2024-11-15 12:29:27.222816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.141 #14 NEW cov: 12442 ft: 14366 corp: 8/95b lim: 40 exec/s: 0 rss: 74Mb L: 8/19 MS: 1 CopyPart- 00:07:47.141 [2024-11-15 12:29:27.313052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ac0c0c0 cdw11:c0c0c00a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.141 [2024-11-15 12:29:27.313091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.141 [2024-11-15 12:29:27.313125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c0c0c0c0 cdw11:c0c0c00a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.141 [2024-11-15 12:29:27.313142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.141 NEW_FUNC[1/1]: 0x1c350e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:47.141 #15 NEW cov: 12465 ft: 14416 corp: 9/117b lim: 40 exec/s: 0 rss: 74Mb L: 22/22 MS: 1 PersAutoDict- DE: "\010\000\000\000"- 00:07:47.141 [2024-11-15 12:29:27.373089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0ac0c0 cdw11:c0c0c0c0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.141 [2024-11-15 12:29:27.373124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.141 #16 NEW cov: 12465 ft: 14467 corp: 10/126b lim: 40 exec/s: 0 rss: 74Mb L: 9/22 MS: 1 CopyPart- 00:07:47.141 [2024-11-15 12:29:27.423223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 
cdw10:0a0ac0c0 cdw11:c0c0c0c0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.142 [2024-11-15 12:29:27.423256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.401 #17 NEW cov: 12465 ft: 14495 corp: 11/135b lim: 40 exec/s: 17 rss: 74Mb L: 9/22 MS: 1 CopyPart- 00:07:47.401 [2024-11-15 12:29:27.514466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ac0c0c0 cdw11:c0c0c00a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.401 [2024-11-15 12:29:27.514497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.401 [2024-11-15 12:29:27.514570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c0c0c0c0 cdw11:c0c0c00a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.401 [2024-11-15 12:29:27.514586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.401 [2024-11-15 12:29:27.514647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:c00ac0c0 cdw11:c0c0c0c0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.401 [2024-11-15 12:29:27.514662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.401 #18 NEW cov: 12465 ft: 14802 corp: 12/166b lim: 40 exec/s: 18 rss: 74Mb L: 31/31 MS: 1 CrossOver- 00:07:47.401 [2024-11-15 12:29:27.574611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ac0c0c0 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.401 [2024-11-15 12:29:27.574636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.401 [2024-11-15 12:29:27.574711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c0c0c0c0 cdw11:0aeeeeee SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.401 [2024-11-15 12:29:27.574725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.401 [2024-11-15 12:29:27.574784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:eeeeeeee cdw11:eeeeeeee SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.401 [2024-11-15 12:29:27.574797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.401 #19 NEW cov: 12465 ft: 14853 corp: 13/191b lim: 40 exec/s: 19 rss: 74Mb L: 25/31 MS: 1 InsertRepeatedBytes- 00:07:47.401 [2024-11-15 12:29:27.634432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c0c0c0c0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.401 [2024-11-15 12:29:27.634460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.401 #22 NEW cov: 12465 ft: 14962 corp: 14/202b lim: 40 exec/s: 22 rss: 74Mb L: 11/31 MS: 3 EraseBytes-CopyPart-CopyPart- 00:07:47.401 [2024-11-15 12:29:27.675031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ac0c0c0 cdw11:c0c0c00a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.401 [2024-11-15 
12:29:27.675057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.401 [2024-11-15 12:29:27.675133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c0c0c0c0 cdw11:c0c0c00a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.401 [2024-11-15 12:29:27.675148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.401 [2024-11-15 12:29:27.675206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24c00ac0 cdw11:c0c0c0c0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.401 [2024-11-15 12:29:27.675220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.401 [2024-11-15 12:29:27.675280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:c0c00ac0 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.401 [2024-11-15 12:29:27.675293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.401 #23 NEW cov: 12465 ft: 15278 corp: 15/234b lim: 40 exec/s: 23 rss: 74Mb L: 32/32 MS: 1 InsertByte- 00:07:47.401 [2024-11-15 12:29:27.734706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c0c0c0c0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.401 [2024-11-15 12:29:27.734731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.661 #24 NEW cov: 12465 ft: 15311 corp: 16/245b lim: 40 exec/s: 24 rss: 74Mb L: 11/32 MS: 1 ShuffleBytes- 00:07:47.661 [2024-11-15 12:29:27.795206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ac0c0c0 cdw11:0800000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.661 [2024-11-15 12:29:27.795231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.661 [2024-11-15 12:29:27.795307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c0c0c008 cdw11:000000c0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.661 [2024-11-15 12:29:27.795327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.661 [2024-11-15 12:29:27.795384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:c0c000c0 cdw11:c0c0c00a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.661 [2024-11-15 12:29:27.795397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.661 #25 NEW cov: 12465 ft: 15359 corp: 17/269b lim: 40 exec/s: 25 rss: 74Mb L: 24/32 MS: 1 CopyPart- 00:07:47.661 [2024-11-15 12:29:27.835290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ac0c0c0 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.661 [2024-11-15 12:29:27.835320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.661 [2024-11-15 12:29:27.835400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c0c0c0c0 cdw11:0aeeeeee SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.661 [2024-11-15 12:29:27.835415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.661 [2024-11-15 12:29:27.835474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:08000000 cdw11:eeeeeeee SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.661 [2024-11-15 12:29:27.835488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.661 #26 NEW cov: 12465 ft: 15401 corp: 18/298b lim: 40 exec/s: 26 rss: 74Mb L: 29/32 MS: 1 PersAutoDict- DE: "\010\000\000\000"- 00:07:47.661 [2024-11-15 12:29:27.895185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ac0300a cdw11:c0c0c0c0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.661 [2024-11-15 12:29:27.895210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.661 #31 NEW cov: 12465 ft: 15496 corp: 19/306b lim: 40 exec/s: 31 rss: 74Mb L: 8/32 MS: 5 InsertByte-CrossOver-InsertByte-ShuffleBytes-CrossOver- 00:07:47.661 [2024-11-15 12:29:27.935769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ac0c0c0 cdw11:c0c0c00a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.661 [2024-11-15 12:29:27.935794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.661 [2024-11-15 12:29:27.935871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c0c0c0c0 cdw11:c0c0c00a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.661 [2024-11-15 12:29:27.935886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.661 [2024-11-15 12:29:27.935945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:24c0c0c0 cdw11:c0c00ac0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.661 [2024-11-15 12:29:27.935963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.661 [2024-11-15 12:29:27.936021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:c00ac0c0 cdw11:c0c0c0c0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.661 [2024-11-15 12:29:27.936035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.661 #32 NEW cov: 12465 ft: 15504 corp: 20/345b lim: 40 exec/s: 32 rss: 74Mb L: 39/39 MS: 1 CrossOver- 00:07:47.661 [2024-11-15 12:29:27.995777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ac0c0c0 cdw11:c0c0c00a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.661 [2024-11-15 12:29:27.995802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.661 [2024-11-15 12:29:27.995878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c00ac0c0 cdw11:c0c0c0c0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.661 [2024-11-15 12:29:27.995892] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.661 [2024-11-15 12:29:27.995952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:c00ac00a cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.661 [2024-11-15 12:29:27.995966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.921 #33 NEW cov: 12465 ft: 15539 corp: 21/369b lim: 40 exec/s: 33 rss: 74Mb L: 24/39 MS: 1 CopyPart- 00:07:47.921 [2024-11-15 12:29:28.035521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ac0350a cdw11:c0c0c0c0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.921 [2024-11-15 12:29:28.035546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.921 #34 NEW cov: 12465 ft: 15578 corp: 22/377b lim: 40 exec/s: 34 rss: 74Mb L: 8/39 MS: 1 ChangeASCIIInt- 00:07:47.921 [2024-11-15 12:29:28.095737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:0ac0300a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.921 [2024-11-15 12:29:28.095762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.921 #35 NEW cov: 12465 ft: 15619 corp: 23/389b lim: 40 exec/s: 35 rss: 74Mb L: 12/39 MS: 1 PersAutoDict- DE: "\010\000\000\000"- 00:07:47.921 [2024-11-15 12:29:28.135834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:2dc0c0c0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.921 [2024-11-15 12:29:28.135860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.921 #36 NEW cov: 12465 ft: 15644 corp: 24/400b lim: 40 exec/s: 36 rss: 74Mb L: 11/39 MS: 1 ChangeByte- 00:07:47.921 [2024-11-15 12:29:28.176254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ac0c0c0 cdw11:c0c0c00a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.921 [2024-11-15 12:29:28.176280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.921 [2024-11-15 12:29:28.176344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c0c0c0c0 cdw11:c0c0c00a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.921 [2024-11-15 12:29:28.176360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.921 [2024-11-15 12:29:28.176418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:08000000 cdw11:c0c0c0c0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.921 [2024-11-15 12:29:28.176435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.921 #37 NEW cov: 12465 ft: 15649 corp: 25/431b lim: 40 exec/s: 37 rss: 74Mb L: 31/39 MS: 1 PersAutoDict- DE: "\010\000\000\000"- 00:07:47.921 [2024-11-15 12:29:28.216087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c0c0c8c0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.921 
[2024-11-15 12:29:28.216114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.921 #38 NEW cov: 12465 ft: 15652 corp: 26/442b lim: 40 exec/s: 38 rss: 74Mb L: 11/39 MS: 1 ChangeBit- 00:07:48.181 [2024-11-15 12:29:28.276261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c00a0a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.181 [2024-11-15 12:29:28.276287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.181 #39 NEW cov: 12465 ft: 15683 corp: 27/457b lim: 40 exec/s: 39 rss: 74Mb L: 15/39 MS: 1 CopyPart- 00:07:48.181 [2024-11-15 12:29:28.316361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ac0300a cdw11:c0c0c030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.181 [2024-11-15 12:29:28.316388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.181 #40 NEW cov: 12465 ft: 15747 corp: 28/470b lim: 40 exec/s: 40 rss: 74Mb L: 13/39 MS: 1 CopyPart- 00:07:48.181 [2024-11-15 12:29:28.356644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ac0c0c0 cdw11:c0c0c00a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.181 [2024-11-15 12:29:28.356670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.181 [2024-11-15 12:29:28.356747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c0c0c0e0 cdw11:c0c0c00a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.181 [2024-11-15 12:29:28.356762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.181 #41 NEW cov: 12465 ft: 15779 corp: 29/492b lim: 40 exec/s: 41 rss: 74Mb L: 22/39 MS: 1 ChangeBit- 00:07:48.181 [2024-11-15 12:29:28.396746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ac0c0c0 cdw11:c0c0c00a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.181 [2024-11-15 12:29:28.396771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.181 [2024-11-15 12:29:28.396846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c0c0c0f4 cdw11:c0c0c0c0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.181 [2024-11-15 12:29:28.396861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.181 #42 NEW cov: 12465 ft: 15796 corp: 30/511b lim: 40 exec/s: 42 rss: 74Mb L: 19/39 MS: 1 InsertByte- 00:07:48.181 [2024-11-15 12:29:28.436893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c0c0c0c0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.181 [2024-11-15 12:29:28.436918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.181 [2024-11-15 12:29:28.436993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c0c00a0a cdw11:c0c0c0c0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.181 [2024-11-15 
12:29:28.437008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.181 #43 NEW cov: 12465 ft: 15822 corp: 31/530b lim: 40 exec/s: 21 rss: 74Mb L: 19/39 MS: 1 CrossOver- 00:07:48.181 #43 DONE cov: 12465 ft: 15822 corp: 31/530b lim: 40 exec/s: 21 rss: 74Mb 00:07:48.181 ###### Recommended dictionary. ###### 00:07:48.181 "\010\000\000\000" # Uses: 4 00:07:48.181 ###### End of recommended dictionary. ###### 00:07:48.181 Done 43 runs in 2 second(s) 00:07:48.440 12:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:07:48.440 12:29:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:48.440 12:29:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:48.440 12:29:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:07:48.440 12:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:07:48.440 12:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:48.441 12:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:48.441 12:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:48.441 12:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:07:48.441 12:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:48.441 12:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:48.441 12:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:07:48.441 12:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:07:48.441 12:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:48.441 12:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:07:48.441 12:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:48.441 12:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:48.441 12:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:48.441 12:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:07:48.441 [2024-11-15 12:29:28.607459] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
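Each stage so far reports "0 files found" in its llvm_nvmf_NN directory and "A corpus is not provided, starting from an empty corpus", so every 1-second run explores from scratch; anything it discovers is kept only through the usual libFuzzer behavior of writing new inputs back into the corpus directory passed via -D. The short sketch below shows how a previously saved corpus could be restored before re-running stage 13 locally; the archive name is hypothetical and pre-populating the directory is an assumed local workflow, not something this pipeline does in the trace.

    # Assumed local workflow: restore a saved corpus so the run does not start empty.
    CORPUS=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13
    mkdir -p "$CORPUS"
    tar -xzf saved_llvm_nvmf_13_corpus.tar.gz -C "$CORPUS"   # hypothetical archive
    # With files present, the fuzzer reports "N files found" instead of 0 and
    # extends that corpus during the run rather than starting from nothing.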
00:07:48.441 [2024-11-15 12:29:28.607528] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid674469 ] 00:07:48.700 [2024-11-15 12:29:28.824975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.700 [2024-11-15 12:29:28.864940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.700 [2024-11-15 12:29:28.924460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.700 [2024-11-15 12:29:28.940685] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:07:48.700 INFO: Running with entropic power schedule (0xFF, 100). 00:07:48.700 INFO: Seed: 148206201 00:07:48.700 INFO: Loaded 1 modules (387659 inline 8-bit counters): 387659 [0x2c4084c, 0x2c9f297), 00:07:48.700 INFO: Loaded 1 PC tables (387659 PCs): 387659 [0x2c9f298,0x3289748), 00:07:48.700 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:48.700 INFO: A corpus is not provided, starting from an empty corpus 00:07:48.700 #2 INITED exec/s: 0 rss: 66Mb 00:07:48.700 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:48.700 This may also happen if the target rejected all inputs we tried so far 00:07:48.700 [2024-11-15 12:29:28.995744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.700 [2024-11-15 12:29:28.995787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.700 [2024-11-15 12:29:28.995823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.700 [2024-11-15 12:29:28.995839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.700 [2024-11-15 12:29:28.995870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.700 [2024-11-15 12:29:28.995886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.219 NEW_FUNC[1/715]: 0x44e138 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:07:49.219 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:49.219 #9 NEW cov: 12226 ft: 12225 corp: 2/27b lim: 40 exec/s: 0 rss: 73Mb L: 26/26 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:49.219 [2024-11-15 12:29:29.346457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a002a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.219 [2024-11-15 12:29:29.346502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.219 #13 NEW cov: 12339 ft: 13204 corp: 3/42b lim: 40 exec/s: 0 rss: 74Mb L: 15/26 MS: 4 
CMP-CopyPart-ChangeByte-CMP- DE: "\032\000\000\000"-"\000\000\000\000\000\000\000\000"- 00:07:49.219 [2024-11-15 12:29:29.406506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a002a00 cdw11:002e0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.219 [2024-11-15 12:29:29.406541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.219 #14 NEW cov: 12345 ft: 13501 corp: 4/57b lim: 40 exec/s: 0 rss: 74Mb L: 15/26 MS: 1 ChangeByte- 00:07:49.219 [2024-11-15 12:29:29.496738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a002a00 cdw11:002e0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.219 [2024-11-15 12:29:29.496770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.478 #15 NEW cov: 12430 ft: 13768 corp: 5/72b lim: 40 exec/s: 0 rss: 74Mb L: 15/26 MS: 1 ShuffleBytes- 00:07:49.478 [2024-11-15 12:29:29.586965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a000000 cdw11:001a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.478 [2024-11-15 12:29:29.586999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.478 #16 NEW cov: 12430 ft: 13817 corp: 6/81b lim: 40 exec/s: 0 rss: 74Mb L: 9/26 MS: 1 EraseBytes- 00:07:49.478 [2024-11-15 12:29:29.677387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a002a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.478 [2024-11-15 12:29:29.677418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.478 [2024-11-15 12:29:29.677469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:04000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.478 [2024-11-15 12:29:29.677486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.478 #17 NEW cov: 12430 ft: 14029 corp: 7/104b lim: 40 exec/s: 0 rss: 74Mb L: 23/26 MS: 1 CMP- DE: "\001\004\000\000\000\000\000\000"- 00:07:49.478 [2024-11-15 12:29:29.737489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a672a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.478 [2024-11-15 12:29:29.737526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.478 #18 NEW cov: 12430 ft: 14110 corp: 8/119b lim: 40 exec/s: 0 rss: 74Mb L: 15/26 MS: 1 ChangeByte- 00:07:49.478 [2024-11-15 12:29:29.787622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a002a00 cdw11:002e0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.478 [2024-11-15 12:29:29.787651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.478 [2024-11-15 12:29:29.787700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0000001a cdw11:006b000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.478 [2024-11-15 
12:29:29.787716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.738 #19 NEW cov: 12430 ft: 14190 corp: 9/135b lim: 40 exec/s: 0 rss: 74Mb L: 16/26 MS: 1 InsertByte- 00:07:49.738 [2024-11-15 12:29:29.847767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000003f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.738 [2024-11-15 12:29:29.847799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.738 NEW_FUNC[1/1]: 0x1c350e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:49.738 #28 NEW cov: 12447 ft: 14287 corp: 10/143b lim: 40 exec/s: 0 rss: 74Mb L: 8/26 MS: 4 InsertByte-EraseBytes-ChangeByte-InsertRepeatedBytes- 00:07:49.738 [2024-11-15 12:29:29.897938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.738 [2024-11-15 12:29:29.897972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.738 #29 NEW cov: 12447 ft: 14354 corp: 11/155b lim: 40 exec/s: 0 rss: 74Mb L: 12/26 MS: 1 PersAutoDict- DE: "\032\000\000\000"- 00:07:49.738 [2024-11-15 12:29:29.988158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a002a00 cdw11:002e0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.738 [2024-11-15 12:29:29.988190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.738 [2024-11-15 12:29:29.988240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0000001a cdw11:6b00000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.738 [2024-11-15 12:29:29.988256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.738 #30 NEW cov: 12447 ft: 14407 corp: 12/171b lim: 40 exec/s: 30 rss: 74Mb L: 16/26 MS: 1 ShuffleBytes- 00:07:49.738 [2024-11-15 12:29:30.078615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a002a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.738 [2024-11-15 12:29:30.078662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.738 [2024-11-15 12:29:30.078699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00003f00 cdw11:002e0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.738 [2024-11-15 12:29:30.078717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.738 [2024-11-15 12:29:30.078748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0000001a cdw11:6b00000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.738 [2024-11-15 12:29:30.078765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.998 #31 NEW cov: 12447 ft: 14429 corp: 13/195b lim: 40 exec/s: 31 rss: 74Mb L: 24/26 MS: 1 CrossOver- 00:07:49.998 [2024-11-15 
12:29:30.168777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a002a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.998 [2024-11-15 12:29:30.168823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.998 [2024-11-15 12:29:30.168858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00003f00 cdw11:002e0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.998 [2024-11-15 12:29:30.168875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.998 [2024-11-15 12:29:30.168905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0000001a cdw11:6b00000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.998 [2024-11-15 12:29:30.168921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.998 #32 NEW cov: 12447 ft: 14451 corp: 14/219b lim: 40 exec/s: 32 rss: 74Mb L: 24/26 MS: 1 ShuffleBytes- 00:07:49.998 [2024-11-15 12:29:30.258861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a672a00 cdw11:00000008 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.998 [2024-11-15 12:29:30.258898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.998 #33 NEW cov: 12447 ft: 14500 corp: 15/234b lim: 40 exec/s: 33 rss: 74Mb L: 15/26 MS: 1 ChangeBinInt- 00:07:50.257 [2024-11-15 12:29:30.349110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a002a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.257 [2024-11-15 12:29:30.349143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.257 #34 NEW cov: 12447 ft: 14539 corp: 16/249b lim: 40 exec/s: 34 rss: 74Mb L: 15/26 MS: 1 CrossOver- 00:07:50.257 [2024-11-15 12:29:30.409393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a002a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.257 [2024-11-15 12:29:30.409424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.257 [2024-11-15 12:29:30.409460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00003f00 cdw11:002e0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.257 [2024-11-15 12:29:30.409477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.257 [2024-11-15 12:29:30.409508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0000001a cdw11:6b00000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.257 [2024-11-15 12:29:30.409524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.257 #40 NEW cov: 12447 ft: 14619 corp: 17/279b lim: 40 exec/s: 40 rss: 74Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:07:50.257 [2024-11-15 12:29:30.499616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a002a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.258 [2024-11-15 12:29:30.499648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.258 [2024-11-15 12:29:30.499684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00003f00 cdw11:002e0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.258 [2024-11-15 12:29:30.499701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.258 [2024-11-15 12:29:30.499732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0000001a cdw11:6b00000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.258 [2024-11-15 12:29:30.499754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.258 #41 NEW cov: 12447 ft: 14682 corp: 18/310b lim: 40 exec/s: 41 rss: 74Mb L: 31/31 MS: 1 InsertByte- 00:07:50.258 [2024-11-15 12:29:30.580525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a002a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.258 [2024-11-15 12:29:30.580551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.258 [2024-11-15 12:29:30.580609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00003f00 cdw11:002e0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.258 [2024-11-15 12:29:30.580624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.258 [2024-11-15 12:29:30.580683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0000001a cdw11:6b00000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.258 [2024-11-15 12:29:30.580697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.517 #42 NEW cov: 12447 ft: 14757 corp: 19/341b lim: 40 exec/s: 42 rss: 74Mb L: 31/31 MS: 1 ShuffleBytes- 00:07:50.517 [2024-11-15 12:29:30.640460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:008f0000 cdw11:0000003f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.517 [2024-11-15 12:29:30.640486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.517 #43 NEW cov: 12447 ft: 14881 corp: 20/349b lim: 40 exec/s: 43 rss: 74Mb L: 8/31 MS: 1 ChangeByte- 00:07:50.517 [2024-11-15 12:29:30.680572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a002a00 cdw11:2a00002e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.517 [2024-11-15 12:29:30.680608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.517 #44 NEW cov: 12447 ft: 14938 corp: 21/364b lim: 40 exec/s: 44 rss: 74Mb L: 15/31 MS: 1 CopyPart- 00:07:50.517 [2024-11-15 12:29:30.721021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a002a00 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:50.517 [2024-11-15 12:29:30.721046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.517 [2024-11-15 12:29:30.721105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00003f00 cdw11:002e0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.517 [2024-11-15 12:29:30.721120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.517 [2024-11-15 12:29:30.721178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0000001a cdw11:6b00000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.517 [2024-11-15 12:29:30.721192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.517 [2024-11-15 12:29:30.721247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:672a0000 cdw11:00000800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.517 [2024-11-15 12:29:30.721261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.517 #45 NEW cov: 12447 ft: 15426 corp: 22/398b lim: 40 exec/s: 45 rss: 74Mb L: 34/34 MS: 1 CrossOver- 00:07:50.517 [2024-11-15 12:29:30.760793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.517 [2024-11-15 12:29:30.760822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.517 #46 NEW cov: 12447 ft: 15442 corp: 23/407b lim: 40 exec/s: 46 rss: 74Mb L: 9/34 MS: 1 EraseBytes- 00:07:50.517 [2024-11-15 12:29:30.821071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:008f0000 cdw11:001a6700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.517 [2024-11-15 12:29:30.821097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.517 [2024-11-15 12:29:30.821153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:002a0000 cdw11:00003f00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.517 [2024-11-15 12:29:30.821168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.777 #47 NEW cov: 12447 ft: 15460 corp: 24/430b lim: 40 exec/s: 47 rss: 74Mb L: 23/34 MS: 1 CrossOver- 00:07:50.777 [2024-11-15 12:29:30.881127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a672a0a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.777 [2024-11-15 12:29:30.881152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.777 #48 NEW cov: 12454 ft: 15486 corp: 25/445b lim: 40 exec/s: 48 rss: 74Mb L: 15/34 MS: 1 CrossOver- 00:07:50.777 [2024-11-15 12:29:30.921500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a002a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.777 [2024-11-15 12:29:30.921525] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.777 [2024-11-15 12:29:30.921580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.777 [2024-11-15 12:29:30.921594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.777 [2024-11-15 12:29:30.921652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0000001a cdw11:6b00000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.777 [2024-11-15 12:29:30.921666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.777 #49 NEW cov: 12454 ft: 15500 corp: 26/469b lim: 40 exec/s: 49 rss: 74Mb L: 24/34 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:50.777 [2024-11-15 12:29:30.961464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a010400 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.777 [2024-11-15 12:29:30.961489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.777 [2024-11-15 12:29:30.961548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0000001a cdw11:6b00000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.777 [2024-11-15 12:29:30.961561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.777 #50 NEW cov: 12454 ft: 15514 corp: 27/485b lim: 40 exec/s: 25 rss: 74Mb L: 16/34 MS: 1 PersAutoDict- DE: "\001\004\000\000\000\000\000\000"- 00:07:50.777 #50 DONE cov: 12454 ft: 15514 corp: 27/485b lim: 40 exec/s: 25 rss: 74Mb 00:07:50.777 ###### Recommended dictionary. ###### 00:07:50.777 "\032\000\000\000" # Uses: 1 00:07:50.777 "\000\000\000\000\000\000\000\000" # Uses: 1 00:07:50.777 "\001\004\000\000\000\000\000\000" # Uses: 1 00:07:50.777 ###### End of recommended dictionary. 
###### 00:07:50.777 Done 50 runs in 2 second(s) 00:07:50.777 12:29:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:07:50.777 12:29:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:50.777 12:29:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:50.777 12:29:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:07:50.777 12:29:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:07:50.777 12:29:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:50.777 12:29:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:50.777 12:29:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:50.777 12:29:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:07:50.777 12:29:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:50.777 12:29:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:50.777 12:29:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:07:50.777 12:29:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:07:50.777 12:29:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:50.777 12:29:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:07:50.777 12:29:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:50.777 12:29:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:50.777 12:29:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:50.777 12:29:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:07:51.037 [2024-11-15 12:29:31.132392] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:07:51.037 [2024-11-15 12:29:31.132469] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid674813 ] 00:07:51.037 [2024-11-15 12:29:31.353020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.296 [2024-11-15 12:29:31.393950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.296 [2024-11-15 12:29:31.453524] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.296 [2024-11-15 12:29:31.469764] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:07:51.296 INFO: Running with entropic power schedule (0xFF, 100). 00:07:51.296 INFO: Seed: 2677233009 00:07:51.296 INFO: Loaded 1 modules (387659 inline 8-bit counters): 387659 [0x2c4084c, 0x2c9f297), 00:07:51.296 INFO: Loaded 1 PC tables (387659 PCs): 387659 [0x2c9f298,0x3289748), 00:07:51.296 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:51.296 INFO: A corpus is not provided, starting from an empty corpus 00:07:51.296 #2 INITED exec/s: 0 rss: 66Mb 00:07:51.296 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:51.296 This may also happen if the target rejected all inputs we tried so far 00:07:51.296 [2024-11-15 12:29:31.525644] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.296 [2024-11-15 12:29:31.525676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.296 [2024-11-15 12:29:31.525748] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.296 [2024-11-15 12:29:31.525764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.296 [2024-11-15 12:29:31.525825] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.296 [2024-11-15 12:29:31.525840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.296 [2024-11-15 12:29:31.525897] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.296 [2024-11-15 12:29:31.525913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.556 NEW_FUNC[1/716]: 0x44fd08 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:07:51.556 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:51.556 #22 NEW cov: 12220 ft: 12219 corp: 2/33b lim: 35 exec/s: 0 rss: 74Mb L: 32/32 MS: 5 ChangeBit-ChangeBit-ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:07:51.556 [2024-11-15 12:29:31.856628] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:51.556 [2024-11-15 12:29:31.856669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.556 [2024-11-15 12:29:31.856747] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.556 [2024-11-15 12:29:31.856763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.556 [2024-11-15 12:29:31.856825] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.556 [2024-11-15 12:29:31.856841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.556 [2024-11-15 12:29:31.856898] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.556 [2024-11-15 12:29:31.856914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.815 #23 NEW cov: 12333 ft: 12853 corp: 3/65b lim: 35 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:51.815 [2024-11-15 12:29:31.916853] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.815 [2024-11-15 12:29:31.916882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.815 [2024-11-15 12:29:31.916962] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.815 [2024-11-15 12:29:31.916979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.815 [2024-11-15 12:29:31.917042] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.815 [2024-11-15 12:29:31.917058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.815 [2024-11-15 12:29:31.917116] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.815 [2024-11-15 12:29:31.917132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.815 [2024-11-15 12:29:31.917191] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.815 [2024-11-15 12:29:31.917211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:51.815 #24 NEW cov: 12339 ft: 13124 corp: 4/100b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 CrossOver- 00:07:51.815 [2024-11-15 12:29:31.956986] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.815 [2024-11-15 12:29:31.957016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE 
(01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.815 [2024-11-15 12:29:31.957095] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.815 [2024-11-15 12:29:31.957112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.815 [2024-11-15 12:29:31.957175] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.815 [2024-11-15 12:29:31.957191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.815 [2024-11-15 12:29:31.957252] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.815 [2024-11-15 12:29:31.957268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.815 [2024-11-15 12:29:31.957332] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.815 [2024-11-15 12:29:31.957349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:51.815 #25 NEW cov: 12424 ft: 13384 corp: 5/135b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:07:51.815 [2024-11-15 12:29:32.016977] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.815 [2024-11-15 12:29:32.017004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.815 [2024-11-15 12:29:32.017067] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.816 [2024-11-15 12:29:32.017082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.816 [2024-11-15 12:29:32.017145] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.816 [2024-11-15 12:29:32.017160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.816 [2024-11-15 12:29:32.017220] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.816 [2024-11-15 12:29:32.017236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.816 #26 NEW cov: 12424 ft: 13507 corp: 6/167b lim: 35 exec/s: 0 rss: 74Mb L: 32/35 MS: 1 ShuffleBytes- 00:07:51.816 [2024-11-15 12:29:32.076823] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.816 [2024-11-15 12:29:32.076852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.816 [2024-11-15 12:29:32.076915] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.816 [2024-11-15 12:29:32.076932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.816 #32 NEW cov: 12424 ft: 13943 corp: 7/187b lim: 35 exec/s: 0 rss: 75Mb L: 20/35 MS: 1 EraseBytes- 00:07:51.816 [2024-11-15 12:29:32.137289] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.816 [2024-11-15 12:29:32.137322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.816 [2024-11-15 12:29:32.137398] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.816 [2024-11-15 12:29:32.137416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.816 [2024-11-15 12:29:32.137478] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.816 [2024-11-15 12:29:32.137494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.816 [2024-11-15 12:29:32.137555] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.816 [2024-11-15 12:29:32.137571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.076 #33 NEW cov: 12424 ft: 14093 corp: 8/219b lim: 35 exec/s: 0 rss: 75Mb L: 32/35 MS: 1 ChangeByte- 00:07:52.076 [2024-11-15 12:29:32.197455] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.076 [2024-11-15 12:29:32.197483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.076 [2024-11-15 12:29:32.197558] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.076 [2024-11-15 12:29:32.197575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.076 [2024-11-15 12:29:32.197635] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.076 [2024-11-15 12:29:32.197651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.076 [2024-11-15 12:29:32.197710] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.076 [2024-11-15 12:29:32.197726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.076 #34 NEW cov: 12424 ft: 14139 corp: 9/251b lim: 35 exec/s: 0 rss: 75Mb L: 32/35 MS: 1 ChangeASCIIInt- 00:07:52.076 [2024-11-15 12:29:32.257274] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.076 [2024-11-15 12:29:32.257302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.076 [2024-11-15 12:29:32.257366] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.076 [2024-11-15 12:29:32.257380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.076 #35 NEW cov: 12431 ft: 14260 corp: 10/271b lim: 35 exec/s: 0 rss: 75Mb L: 20/35 MS: 1 ChangeBinInt- 00:07:52.076 [2024-11-15 12:29:32.317814] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.076 [2024-11-15 12:29:32.317841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.076 [2024-11-15 12:29:32.317923] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.076 [2024-11-15 12:29:32.317943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.076 [2024-11-15 12:29:32.318002] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.076 [2024-11-15 12:29:32.318019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.076 [2024-11-15 12:29:32.318077] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.076 [2024-11-15 12:29:32.318093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.076 #36 NEW cov: 12431 ft: 14343 corp: 11/303b lim: 35 exec/s: 0 rss: 75Mb L: 32/35 MS: 1 ChangeByte- 00:07:52.076 [2024-11-15 12:29:32.358094] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.076 [2024-11-15 12:29:32.358123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.076 [2024-11-15 12:29:32.358199] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.076 [2024-11-15 12:29:32.358216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.076 [2024-11-15 12:29:32.358276] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.076 [2024-11-15 12:29:32.358293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.076 [2024-11-15 12:29:32.358326] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.076 
[2024-11-15 12:29:32.358338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.076 [2024-11-15 12:29:32.358399] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.076 [2024-11-15 12:29:32.358414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:52.076 #37 NEW cov: 12431 ft: 14416 corp: 12/338b lim: 35 exec/s: 0 rss: 75Mb L: 35/35 MS: 1 ChangeBit- 00:07:52.076 [2024-11-15 12:29:32.397865] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.076 [2024-11-15 12:29:32.397892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.076 [2024-11-15 12:29:32.397953] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.076 [2024-11-15 12:29:32.397969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.076 [2024-11-15 12:29:32.398044] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.076 [2024-11-15 12:29:32.398060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.336 NEW_FUNC[1/1]: 0x1c350e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:52.336 #38 NEW cov: 12454 ft: 14657 corp: 13/364b lim: 35 exec/s: 0 rss: 75Mb L: 26/35 MS: 1 EraseBytes- 00:07:52.336 [2024-11-15 12:29:32.458217] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.336 [2024-11-15 12:29:32.458248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.336 [2024-11-15 12:29:32.458331] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.336 [2024-11-15 12:29:32.458348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.336 [2024-11-15 12:29:32.458408] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.336 [2024-11-15 12:29:32.458424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.336 [2024-11-15 12:29:32.458484] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.336 [2024-11-15 12:29:32.458500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.336 #39 NEW cov: 12454 ft: 14726 corp: 14/397b lim: 35 exec/s: 0 rss: 75Mb L: 33/35 MS: 1 InsertByte- 00:07:52.336 [2024-11-15 12:29:32.498288] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.336 [2024-11-15 12:29:32.498319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.336 [2024-11-15 12:29:32.498382] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.336 [2024-11-15 12:29:32.498398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.336 [2024-11-15 12:29:32.498461] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.336 [2024-11-15 12:29:32.498476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.336 [2024-11-15 12:29:32.498538] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.336 [2024-11-15 12:29:32.498553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.336 #40 NEW cov: 12454 ft: 14787 corp: 15/429b lim: 35 exec/s: 40 rss: 75Mb L: 32/35 MS: 1 CrossOver- 00:07:52.336 [2024-11-15 12:29:32.538404] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.336 [2024-11-15 12:29:32.538432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.336 [2024-11-15 12:29:32.538507] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.336 [2024-11-15 12:29:32.538522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.336 [2024-11-15 12:29:32.538586] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.336 [2024-11-15 12:29:32.538602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.336 [2024-11-15 12:29:32.538663] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.336 [2024-11-15 12:29:32.538680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.336 #41 NEW cov: 12454 ft: 14829 corp: 16/462b lim: 35 exec/s: 41 rss: 75Mb L: 33/35 MS: 1 CrossOver- 00:07:52.336 [2024-11-15 12:29:32.578542] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.336 [2024-11-15 12:29:32.578571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.336 [2024-11-15 12:29:32.578634] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.336 [2024-11-15 12:29:32.578648] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.336 [2024-11-15 12:29:32.578711] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.336 [2024-11-15 12:29:32.578725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.336 [2024-11-15 12:29:32.578786] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000014 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.336 [2024-11-15 12:29:32.578802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.336 #42 NEW cov: 12454 ft: 14859 corp: 17/490b lim: 35 exec/s: 42 rss: 75Mb L: 28/35 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:07:52.336 [2024-11-15 12:29:32.638717] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.336 [2024-11-15 12:29:32.638745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.336 [2024-11-15 12:29:32.638808] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.336 [2024-11-15 12:29:32.638824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.336 [2024-11-15 12:29:32.638884] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.336 [2024-11-15 12:29:32.638899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.336 [2024-11-15 12:29:32.638961] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000098 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.336 [2024-11-15 12:29:32.638977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.596 #43 NEW cov: 12454 ft: 14879 corp: 18/524b lim: 35 exec/s: 43 rss: 75Mb L: 34/35 MS: 1 InsertByte- 00:07:52.596 [2024-11-15 12:29:32.698901] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.596 [2024-11-15 12:29:32.698931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.596 [2024-11-15 12:29:32.699009] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.596 [2024-11-15 12:29:32.699028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.596 [2024-11-15 12:29:32.699089] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.596 [2024-11-15 12:29:32.699105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 
m:0 dnr:0 00:07:52.596 [2024-11-15 12:29:32.699165] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.596 [2024-11-15 12:29:32.699182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.596 #44 NEW cov: 12454 ft: 14918 corp: 19/556b lim: 35 exec/s: 44 rss: 75Mb L: 32/35 MS: 1 ChangeByte- 00:07:52.596 [2024-11-15 12:29:32.739191] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.596 [2024-11-15 12:29:32.739219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.596 [2024-11-15 12:29:32.739281] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.596 [2024-11-15 12:29:32.739298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.596 [2024-11-15 12:29:32.739363] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.596 [2024-11-15 12:29:32.739377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.596 [2024-11-15 12:29:32.739437] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.596 [2024-11-15 12:29:32.739452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.596 [2024-11-15 12:29:32.739514] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.596 [2024-11-15 12:29:32.739530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:52.596 #45 NEW cov: 12454 ft: 14932 corp: 20/591b lim: 35 exec/s: 45 rss: 75Mb L: 35/35 MS: 1 CrossOver- 00:07:52.596 [2024-11-15 12:29:32.779096] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.596 [2024-11-15 12:29:32.779125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.596 [2024-11-15 12:29:32.779205] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.596 [2024-11-15 12:29:32.779221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.596 [2024-11-15 12:29:32.779284] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.596 [2024-11-15 12:29:32.779301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.596 [2024-11-15 12:29:32.779365] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 
cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.596 [2024-11-15 12:29:32.779382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.596 #46 NEW cov: 12454 ft: 14955 corp: 21/619b lim: 35 exec/s: 46 rss: 75Mb L: 28/35 MS: 1 EraseBytes- 00:07:52.596 [2024-11-15 12:29:32.819212] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.596 [2024-11-15 12:29:32.819241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.596 [2024-11-15 12:29:32.819304] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.596 [2024-11-15 12:29:32.819325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.596 [2024-11-15 12:29:32.819387] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.596 [2024-11-15 12:29:32.819406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.596 [2024-11-15 12:29:32.819468] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.596 [2024-11-15 12:29:32.819485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.596 #47 NEW cov: 12454 ft: 14985 corp: 22/651b lim: 35 exec/s: 47 rss: 75Mb L: 32/35 MS: 1 CopyPart- 00:07:52.596 [2024-11-15 12:29:32.859395] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.596 [2024-11-15 12:29:32.859424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.596 [2024-11-15 12:29:32.859487] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.596 [2024-11-15 12:29:32.859504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.596 [2024-11-15 12:29:32.859565] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.596 [2024-11-15 12:29:32.859581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.596 [2024-11-15 12:29:32.859655] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.596 [2024-11-15 12:29:32.859670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.596 #48 NEW cov: 12454 ft: 15024 corp: 23/684b lim: 35 exec/s: 48 rss: 75Mb L: 33/35 MS: 1 InsertByte- 00:07:52.596 [2024-11-15 12:29:32.899492] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:07:52.596 [2024-11-15 12:29:32.899522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.596 [2024-11-15 12:29:32.899584] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.596 [2024-11-15 12:29:32.899600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.596 [2024-11-15 12:29:32.899662] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.596 [2024-11-15 12:29:32.899679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.596 [2024-11-15 12:29:32.899741] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.596 [2024-11-15 12:29:32.899757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.856 #49 NEW cov: 12454 ft: 15114 corp: 24/716b lim: 35 exec/s: 49 rss: 75Mb L: 32/35 MS: 1 ChangeByte- 00:07:52.856 [2024-11-15 12:29:32.959657] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.856 [2024-11-15 12:29:32.959685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.856 [2024-11-15 12:29:32.959761] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.856 [2024-11-15 12:29:32.959789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.856 [2024-11-15 12:29:32.959872] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.856 [2024-11-15 12:29:32.959888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.856 [2024-11-15 12:29:32.959952] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.856 [2024-11-15 12:29:32.959969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.856 #50 NEW cov: 12454 ft: 15124 corp: 25/748b lim: 35 exec/s: 50 rss: 75Mb L: 32/35 MS: 1 ChangeBit- 00:07:52.856 [2024-11-15 12:29:32.999750] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.856 [2024-11-15 12:29:32.999776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.856 [2024-11-15 12:29:32.999854] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.856 [2024-11-15 12:29:32.999871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID 
NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.856 [2024-11-15 12:29:32.999931] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.856 [2024-11-15 12:29:32.999946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.856 [2024-11-15 12:29:33.000008] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.856 [2024-11-15 12:29:33.000025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.856 #51 NEW cov: 12454 ft: 15144 corp: 26/780b lim: 35 exec/s: 51 rss: 75Mb L: 32/35 MS: 1 CrossOver- 00:07:52.856 [2024-11-15 12:29:33.039724] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.856 [2024-11-15 12:29:33.039752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.856 [2024-11-15 12:29:33.039831] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.856 [2024-11-15 12:29:33.039849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.856 [2024-11-15 12:29:33.039912] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.856 [2024-11-15 12:29:33.039929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.856 #52 NEW cov: 12454 ft: 15161 corp: 27/804b lim: 35 exec/s: 52 rss: 75Mb L: 24/35 MS: 1 EraseBytes- 00:07:52.856 [2024-11-15 12:29:33.079665] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.856 [2024-11-15 12:29:33.079694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.856 [2024-11-15 12:29:33.079759] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.856 [2024-11-15 12:29:33.079776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.856 #53 NEW cov: 12454 ft: 15184 corp: 28/823b lim: 35 exec/s: 53 rss: 75Mb L: 19/35 MS: 1 EraseBytes- 00:07:52.856 [2024-11-15 12:29:33.140183] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST IDENTIFIER cid:4 cdw10:80000081 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.856 [2024-11-15 12:29:33.140210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.856 [2024-11-15 12:29:33.140291] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.856 [2024-11-15 12:29:33.140308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE 
(01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.856 [2024-11-15 12:29:33.140377] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.856 [2024-11-15 12:29:33.140394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.856 [2024-11-15 12:29:33.140456] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.856 [2024-11-15 12:29:33.140472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.856 NEW_FUNC[1/1]: 0x475be8 in feat_host_identifier /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:372 00:07:52.856 #54 NEW cov: 12464 ft: 15201 corp: 29/856b lim: 35 exec/s: 54 rss: 75Mb L: 33/35 MS: 1 InsertByte- 00:07:52.856 [2024-11-15 12:29:33.180494] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.856 [2024-11-15 12:29:33.180521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.856 [2024-11-15 12:29:33.180582] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.856 [2024-11-15 12:29:33.180597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.856 [2024-11-15 12:29:33.180659] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.856 [2024-11-15 12:29:33.180675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.856 [2024-11-15 12:29:33.180735] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.856 [2024-11-15 12:29:33.180751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.856 [2024-11-15 12:29:33.180813] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.856 [2024-11-15 12:29:33.180829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:53.116 #55 NEW cov: 12464 ft: 15244 corp: 30/891b lim: 35 exec/s: 55 rss: 75Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:53.116 [2024-11-15 12:29:33.240487] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST IDENTIFIER cid:4 cdw10:80000081 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.116 [2024-11-15 12:29:33.240514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.116 [2024-11-15 12:29:33.240592] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.116 [2024-11-15 12:29:33.240610] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.116 [2024-11-15 12:29:33.240668] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.116 [2024-11-15 12:29:33.240687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.116 [2024-11-15 12:29:33.240749] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.116 [2024-11-15 12:29:33.240765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.116 #56 NEW cov: 12464 ft: 15265 corp: 31/924b lim: 35 exec/s: 56 rss: 75Mb L: 33/35 MS: 1 ChangeByte- 00:07:53.116 [2024-11-15 12:29:33.300657] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.116 [2024-11-15 12:29:33.300685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.116 [2024-11-15 12:29:33.300762] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.116 [2024-11-15 12:29:33.300778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.116 [2024-11-15 12:29:33.300841] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.116 [2024-11-15 12:29:33.300855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.116 [2024-11-15 12:29:33.300917] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000098 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.116 [2024-11-15 12:29:33.300934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.116 #57 NEW cov: 12464 ft: 15300 corp: 32/958b lim: 35 exec/s: 57 rss: 76Mb L: 34/35 MS: 1 ChangeByte- 00:07:53.116 [2024-11-15 12:29:33.360885] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.116 [2024-11-15 12:29:33.360913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.116 [2024-11-15 12:29:33.360973] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.116 [2024-11-15 12:29:33.360989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.116 [2024-11-15 12:29:33.361050] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.116 [2024-11-15 12:29:33.361064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.116 
[2024-11-15 12:29:33.361124] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.116 [2024-11-15 12:29:33.361141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.116 #58 NEW cov: 12464 ft: 15307 corp: 33/990b lim: 35 exec/s: 58 rss: 76Mb L: 32/35 MS: 1 ChangeBinInt- 00:07:53.116 [2024-11-15 12:29:33.421009] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.116 [2024-11-15 12:29:33.421036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.116 [2024-11-15 12:29:33.421112] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.116 [2024-11-15 12:29:33.421129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.116 [2024-11-15 12:29:33.421241] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.116 [2024-11-15 12:29:33.421259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.116 NEW_FUNC[1/2]: 0x46a6e8 in feat_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:273 00:07:53.116 NEW_FUNC[2/2]: 0x1378898 in nvmf_ctrlr_set_features_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1604 00:07:53.116 #59 NEW cov: 12521 ft: 15365 corp: 34/1022b lim: 35 exec/s: 59 rss: 76Mb L: 32/35 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:07:53.376 [2024-11-15 12:29:33.461094] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.376 [2024-11-15 12:29:33.461123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.376 [2024-11-15 12:29:33.461188] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.376 [2024-11-15 12:29:33.461205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.376 [2024-11-15 12:29:33.461267] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.376 [2024-11-15 12:29:33.461282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.376 [2024-11-15 12:29:33.461347] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.376 [2024-11-15 12:29:33.461365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.376 #60 NEW cov: 12521 ft: 15405 corp: 35/1054b lim: 35 exec/s: 60 rss: 76Mb L: 32/35 MS: 1 ChangeByte- 00:07:53.376 [2024-11-15 
12:29:33.520884] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.376 [2024-11-15 12:29:33.520913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.376 [2024-11-15 12:29:33.520975] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.376 [2024-11-15 12:29:33.520991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.376 #61 NEW cov: 12521 ft: 15471 corp: 36/1073b lim: 35 exec/s: 30 rss: 76Mb L: 19/35 MS: 1 ChangeBit- 00:07:53.376 #61 DONE cov: 12521 ft: 15471 corp: 36/1073b lim: 35 exec/s: 30 rss: 76Mb 00:07:53.376 ###### Recommended dictionary. ###### 00:07:53.376 "\001\000\000\000\000\000\000\000" # Uses: 1 00:07:53.376 ###### End of recommended dictionary. ###### 00:07:53.376 Done 61 runs in 2 second(s) 00:07:53.376 12:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:07:53.376 12:29:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:53.376 12:29:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:53.376 12:29:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:07:53.376 12:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:07:53.376 12:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:53.376 12:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:53.376 12:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:53.376 12:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:07:53.376 12:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:53.376 12:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:53.376 12:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:07:53.376 12:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:07:53.376 12:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:53.376 12:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:07:53.376 12:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:53.376 12:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:53.376 12:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:53.376 12:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c 
/tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:07:53.376 [2024-11-15 12:29:33.713253] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:07:53.376 [2024-11-15 12:29:33.713330] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid675175 ] 00:07:53.635 [2024-11-15 12:29:33.927422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.635 [2024-11-15 12:29:33.965949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.894 [2024-11-15 12:29:34.025705] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.894 [2024-11-15 12:29:34.041929] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:07:53.894 INFO: Running with entropic power schedule (0xFF, 100). 00:07:53.894 INFO: Seed: 953253671 00:07:53.894 INFO: Loaded 1 modules (387659 inline 8-bit counters): 387659 [0x2c4084c, 0x2c9f297), 00:07:53.894 INFO: Loaded 1 PC tables (387659 PCs): 387659 [0x2c9f298,0x3289748), 00:07:53.894 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:53.894 INFO: A corpus is not provided, starting from an empty corpus 00:07:53.894 #2 INITED exec/s: 0 rss: 66Mb 00:07:53.894 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:53.894 This may also happen if the target rejected all inputs we tried so far 00:07:53.894 [2024-11-15 12:29:34.120374] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.894 [2024-11-15 12:29:34.120421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.894 [2024-11-15 12:29:34.120539] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.894 [2024-11-15 12:29:34.120557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.894 [2024-11-15 12:29:34.120666] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.894 [2024-11-15 12:29:34.120684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.153 NEW_FUNC[1/715]: 0x451248 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:07:54.154 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:54.154 #25 NEW cov: 12190 ft: 12180 corp: 2/28b lim: 35 exec/s: 0 rss: 73Mb L: 27/27 MS: 3 InsertByte-ChangeByte-InsertRepeatedBytes- 00:07:54.154 [2024-11-15 12:29:34.471032] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.154 [2024-11-15 12:29:34.471078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:54.154 [2024-11-15 12:29:34.471189] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.154 [2024-11-15 12:29:34.471208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.154 [2024-11-15 12:29:34.471312] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.154 [2024-11-15 12:29:34.471336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.413 #31 NEW cov: 12320 ft: 12715 corp: 3/55b lim: 35 exec/s: 0 rss: 74Mb L: 27/27 MS: 1 ChangeBinInt- 00:07:54.413 [2024-11-15 12:29:34.541190] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.413 [2024-11-15 12:29:34.541221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.413 [2024-11-15 12:29:34.541338] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.413 [2024-11-15 12:29:34.541356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.413 [2024-11-15 12:29:34.541456] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.413 [2024-11-15 12:29:34.541473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.413 #32 NEW cov: 12326 ft: 12828 corp: 4/82b lim: 35 exec/s: 0 rss: 74Mb L: 27/27 MS: 1 ChangeBit- 00:07:54.413 [2024-11-15 12:29:34.611308] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.413 [2024-11-15 12:29:34.611345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.413 [2024-11-15 12:29:34.611451] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.413 [2024-11-15 12:29:34.611468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.413 #33 NEW cov: 12411 ft: 13277 corp: 5/100b lim: 35 exec/s: 0 rss: 74Mb L: 18/27 MS: 1 EraseBytes- 00:07:54.413 [2024-11-15 12:29:34.662336] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.413 [2024-11-15 12:29:34.662364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.413 [2024-11-15 12:29:34.662470] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.413 [2024-11-15 12:29:34.662487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.413 [2024-11-15 12:29:34.662584] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: 
GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.413 [2024-11-15 12:29:34.662599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.413 [2024-11-15 12:29:34.662693] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007e8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.413 [2024-11-15 12:29:34.662713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.413 [2024-11-15 12:29:34.662817] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007e8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.413 [2024-11-15 12:29:34.662832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.413 #34 NEW cov: 12411 ft: 13956 corp: 6/135b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:54.413 [2024-11-15 12:29:34.732609] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.413 [2024-11-15 12:29:34.732638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.413 [2024-11-15 12:29:34.732742] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.413 [2024-11-15 12:29:34.732758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.413 [2024-11-15 12:29:34.732858] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.413 [2024-11-15 12:29:34.732875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.413 [2024-11-15 12:29:34.732969] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.413 [2024-11-15 12:29:34.732983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.672 #35 NEW cov: 12411 ft: 14116 corp: 7/165b lim: 35 exec/s: 0 rss: 74Mb L: 30/35 MS: 1 CrossOver- 00:07:54.672 [2024-11-15 12:29:34.783304] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.672 [2024-11-15 12:29:34.783334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.672 [2024-11-15 12:29:34.783440] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.672 [2024-11-15 12:29:34.783456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.672 [2024-11-15 12:29:34.783547] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.672 [2024-11-15 12:29:34.783562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.672 [2024-11-15 12:29:34.783658] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007e8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.672 [2024-11-15 12:29:34.783675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.672 [2024-11-15 12:29:34.783777] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007e8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.672 [2024-11-15 12:29:34.783793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.672 #36 NEW cov: 12411 ft: 14195 corp: 8/200b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:54.672 [2024-11-15 12:29:34.852756] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.672 [2024-11-15 12:29:34.852784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.672 [2024-11-15 12:29:34.852896] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.672 [2024-11-15 12:29:34.852913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.672 #42 NEW cov: 12411 ft: 14234 corp: 9/218b lim: 35 exec/s: 0 rss: 74Mb L: 18/35 MS: 1 ChangeBit- 00:07:54.672 [2024-11-15 12:29:34.903040] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.672 [2024-11-15 12:29:34.903067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.672 [2024-11-15 12:29:34.903166] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.672 [2024-11-15 12:29:34.903182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.672 #43 NEW cov: 12411 ft: 14274 corp: 10/237b lim: 35 exec/s: 0 rss: 74Mb L: 19/35 MS: 1 InsertByte- 00:07:54.672 [2024-11-15 12:29:34.953623] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.672 [2024-11-15 12:29:34.953650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.672 [2024-11-15 12:29:34.953746] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.672 [2024-11-15 12:29:34.953762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.672 [2024-11-15 12:29:34.953863] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.672 [2024-11-15 12:29:34.953878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.672 NEW_FUNC[1/1]: 0x1c350e8 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:54.672 #44 NEW cov: 12434 ft: 14471 corp: 11/264b lim: 35 exec/s: 0 rss: 74Mb L: 27/35 MS: 1 InsertRepeatedBytes- 00:07:54.931 [2024-11-15 12:29:35.024040] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.931 [2024-11-15 12:29:35.024069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.931 [2024-11-15 12:29:35.024167] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.932 [2024-11-15 12:29:35.024182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.932 [2024-11-15 12:29:35.024297] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.932 [2024-11-15 12:29:35.024312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.932 #45 NEW cov: 12434 ft: 14507 corp: 12/291b lim: 35 exec/s: 0 rss: 74Mb L: 27/35 MS: 1 ShuffleBytes- 00:07:54.932 [2024-11-15 12:29:35.094652] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.932 [2024-11-15 12:29:35.094680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.932 [2024-11-15 12:29:35.094778] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.932 [2024-11-15 12:29:35.094795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.932 [2024-11-15 12:29:35.094900] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.932 [2024-11-15 12:29:35.094920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.932 #46 NEW cov: 12434 ft: 14537 corp: 13/318b lim: 35 exec/s: 46 rss: 74Mb L: 27/35 MS: 1 ChangeBit- 00:07:54.932 [2024-11-15 12:29:35.165135] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.932 [2024-11-15 12:29:35.165162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.932 [2024-11-15 12:29:35.165259] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.932 [2024-11-15 12:29:35.165276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.932 [2024-11-15 12:29:35.165389] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.932 [2024-11-15 12:29:35.165405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:07:54.932 #47 NEW cov: 12434 ft: 14640 corp: 14/345b lim: 35 exec/s: 47 rss: 74Mb L: 27/35 MS: 1 ShuffleBytes- 00:07:54.932 [2024-11-15 12:29:35.236171] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.932 [2024-11-15 12:29:35.236199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.932 [2024-11-15 12:29:35.236306] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.932 [2024-11-15 12:29:35.236325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.932 [2024-11-15 12:29:35.236439] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.932 [2024-11-15 12:29:35.236456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.932 [2024-11-15 12:29:35.236564] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007e8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.932 [2024-11-15 12:29:35.236581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.932 [2024-11-15 12:29:35.236681] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007e8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.932 [2024-11-15 12:29:35.236696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.932 #48 NEW cov: 12434 ft: 14724 corp: 15/380b lim: 35 exec/s: 48 rss: 74Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:55.191 [2024-11-15 12:29:35.286288] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.191 [2024-11-15 12:29:35.286319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.191 [2024-11-15 12:29:35.286439] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.191 [2024-11-15 12:29:35.286457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.191 [2024-11-15 12:29:35.286560] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.191 [2024-11-15 12:29:35.286578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.191 [2024-11-15 12:29:35.286685] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007e8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.191 [2024-11-15 12:29:35.286702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.191 [2024-11-15 12:29:35.286799] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007e8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.191 
[2024-11-15 12:29:35.286816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.191 #49 NEW cov: 12434 ft: 14794 corp: 16/415b lim: 35 exec/s: 49 rss: 75Mb L: 35/35 MS: 1 CrossOver- 00:07:55.191 [2024-11-15 12:29:35.356801] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.191 [2024-11-15 12:29:35.356831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.191 [2024-11-15 12:29:35.356934] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.191 [2024-11-15 12:29:35.356950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.191 [2024-11-15 12:29:35.357050] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.191 [2024-11-15 12:29:35.357073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.191 [2024-11-15 12:29:35.357177] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007e8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.191 [2024-11-15 12:29:35.357194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.191 [2024-11-15 12:29:35.357297] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007e8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.191 [2024-11-15 12:29:35.357313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.191 #50 NEW cov: 12434 ft: 14840 corp: 17/450b lim: 35 exec/s: 50 rss: 75Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:55.191 [2024-11-15 12:29:35.406969] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.191 [2024-11-15 12:29:35.406998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.191 [2024-11-15 12:29:35.407110] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.191 [2024-11-15 12:29:35.407126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.191 [2024-11-15 12:29:35.407229] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.191 [2024-11-15 12:29:35.407245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.191 [2024-11-15 12:29:35.407347] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007e8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.191 [2024-11-15 12:29:35.407365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.191 [2024-11-15 12:29:35.407457] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007e8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.191 [2024-11-15 12:29:35.407473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.191 #51 NEW cov: 12434 ft: 14866 corp: 18/485b lim: 35 exec/s: 51 rss: 75Mb L: 35/35 MS: 1 ChangeASCIIInt- 00:07:55.191 [2024-11-15 12:29:35.477182] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000004f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.191 [2024-11-15 12:29:35.477212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.191 [2024-11-15 12:29:35.477321] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.191 [2024-11-15 12:29:35.477338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.191 [2024-11-15 12:29:35.477437] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.191 [2024-11-15 12:29:35.477451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.191 [2024-11-15 12:29:35.477546] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007e8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.191 [2024-11-15 12:29:35.477562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.191 [2024-11-15 12:29:35.477662] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007e8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.191 [2024-11-15 12:29:35.477678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.191 #52 NEW cov: 12434 ft: 14924 corp: 19/520b lim: 35 exec/s: 52 rss: 75Mb L: 35/35 MS: 1 ChangeByte- 00:07:55.450 [2024-11-15 12:29:35.547163] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.450 [2024-11-15 12:29:35.547192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.450 [2024-11-15 12:29:35.547294] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.450 [2024-11-15 12:29:35.547310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.450 [2024-11-15 12:29:35.547405] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.450 [2024-11-15 12:29:35.547420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.450 #53 NEW cov: 12434 ft: 14973 corp: 20/541b lim: 35 exec/s: 53 rss: 75Mb L: 21/35 MS: 1 EraseBytes- 00:07:55.450 [2024-11-15 12:29:35.597596] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f7 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.450 [2024-11-15 12:29:35.597626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.450 [2024-11-15 12:29:35.597742] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.450 [2024-11-15 12:29:35.597759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.450 [2024-11-15 12:29:35.597854] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.450 [2024-11-15 12:29:35.597872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.450 #54 NEW cov: 12434 ft: 15029 corp: 21/568b lim: 35 exec/s: 54 rss: 75Mb L: 27/35 MS: 1 ChangeByte- 00:07:55.451 [2024-11-15 12:29:35.648051] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.451 [2024-11-15 12:29:35.648083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.451 [2024-11-15 12:29:35.648193] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.451 [2024-11-15 12:29:35.648209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.451 [2024-11-15 12:29:35.648307] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.451 [2024-11-15 12:29:35.648328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.451 #55 NEW cov: 12434 ft: 15090 corp: 22/595b lim: 35 exec/s: 55 rss: 75Mb L: 27/35 MS: 1 ShuffleBytes- 00:07:55.451 [2024-11-15 12:29:35.698548] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.451 [2024-11-15 12:29:35.698577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.451 [2024-11-15 12:29:35.698674] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.451 [2024-11-15 12:29:35.698690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.451 [2024-11-15 12:29:35.698786] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.451 [2024-11-15 12:29:35.698801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.451 [2024-11-15 12:29:35.698890] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.451 [2024-11-15 12:29:35.698907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:07:55.451 #56 NEW cov: 12434 ft: 15108 corp: 23/625b lim: 35 exec/s: 56 rss: 75Mb L: 30/35 MS: 1 ShuffleBytes- 00:07:55.451 [2024-11-15 12:29:35.768847] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.451 [2024-11-15 12:29:35.768875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.451 [2024-11-15 12:29:35.768978] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.451 [2024-11-15 12:29:35.768996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.451 [2024-11-15 12:29:35.769098] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.451 [2024-11-15 12:29:35.769113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.710 #57 NEW cov: 12434 ft: 15136 corp: 24/652b lim: 35 exec/s: 57 rss: 75Mb L: 27/35 MS: 1 CopyPart- 00:07:55.710 [2024-11-15 12:29:35.838677] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.710 [2024-11-15 12:29:35.838705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.710 [2024-11-15 12:29:35.838823] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007e8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.710 [2024-11-15 12:29:35.838841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.710 #58 NEW cov: 12434 ft: 15141 corp: 25/672b lim: 35 exec/s: 58 rss: 75Mb L: 20/35 MS: 1 EraseBytes- 00:07:55.710 [2024-11-15 12:29:35.889865] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.710 [2024-11-15 12:29:35.889893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.711 [2024-11-15 12:29:35.889988] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.711 [2024-11-15 12:29:35.890005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.711 [2024-11-15 12:29:35.890105] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.711 [2024-11-15 12:29:35.890121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.711 [2024-11-15 12:29:35.890217] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.711 [2024-11-15 12:29:35.890233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.711 #59 NEW cov: 12434 ft: 15148 corp: 26/703b lim: 35 exec/s: 59 rss: 75Mb L: 31/35 
MS: 1 InsertByte- 00:07:55.711 [2024-11-15 12:29:35.939717] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.711 [2024-11-15 12:29:35.939746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.711 [2024-11-15 12:29:35.939853] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.711 [2024-11-15 12:29:35.939869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.711 [2024-11-15 12:29:35.939970] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.711 [2024-11-15 12:29:35.939986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.711 #60 NEW cov: 12434 ft: 15156 corp: 27/724b lim: 35 exec/s: 60 rss: 75Mb L: 21/35 MS: 1 ChangeByte- 00:07:55.711 [2024-11-15 12:29:36.010032] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.711 [2024-11-15 12:29:36.010061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.711 [2024-11-15 12:29:36.010172] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.711 [2024-11-15 12:29:36.010187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.711 [2024-11-15 12:29:36.010282] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.711 [2024-11-15 12:29:36.010299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.711 #61 NEW cov: 12434 ft: 15169 corp: 28/745b lim: 35 exec/s: 61 rss: 75Mb L: 21/35 MS: 1 CopyPart- 00:07:55.970 [2024-11-15 12:29:36.081145] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.971 [2024-11-15 12:29:36.081173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.971 [2024-11-15 12:29:36.081281] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.971 [2024-11-15 12:29:36.081298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.971 [2024-11-15 12:29:36.081408] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007e8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.971 [2024-11-15 12:29:36.081424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.971 [2024-11-15 12:29:36.081523] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007e8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:55.971 [2024-11-15 12:29:36.081540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.971 [2024-11-15 12:29:36.081638] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007e8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.971 [2024-11-15 12:29:36.081655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.971 #62 NEW cov: 12434 ft: 15178 corp: 29/780b lim: 35 exec/s: 31 rss: 75Mb L: 35/35 MS: 1 CopyPart- 00:07:55.971 #62 DONE cov: 12434 ft: 15178 corp: 29/780b lim: 35 exec/s: 31 rss: 75Mb 00:07:55.971 Done 62 runs in 2 second(s) 00:07:55.971 12:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:07:55.971 12:29:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:55.971 12:29:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:55.971 12:29:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:07:55.971 12:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:07:55.971 12:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:55.971 12:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:55.971 12:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:55.971 12:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:07:55.971 12:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:55.971 12:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:55.971 12:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:07:55.971 12:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:07:55.971 12:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:55.971 12:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:07:55.971 12:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:55.971 12:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:55.971 12:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:55.971 12:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:07:55.971 [2024-11-15 12:29:36.254788] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:07:55.971 [2024-11-15 12:29:36.254859] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid675529 ] 00:07:56.230 [2024-11-15 12:29:36.473269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.230 [2024-11-15 12:29:36.512599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.230 [2024-11-15 12:29:36.571888] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.489 [2024-11-15 12:29:36.588089] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:07:56.489 INFO: Running with entropic power schedule (0xFF, 100). 00:07:56.489 INFO: Seed: 3500255021 00:07:56.489 INFO: Loaded 1 modules (387659 inline 8-bit counters): 387659 [0x2c4084c, 0x2c9f297), 00:07:56.489 INFO: Loaded 1 PC tables (387659 PCs): 387659 [0x2c9f298,0x3289748), 00:07:56.489 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:56.489 INFO: A corpus is not provided, starting from an empty corpus 00:07:56.489 #2 INITED exec/s: 0 rss: 66Mb 00:07:56.489 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:56.489 This may also happen if the target rejected all inputs we tried so far 00:07:56.489 [2024-11-15 12:29:36.643019] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.489 [2024-11-15 12:29:36.643057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.489 [2024-11-15 12:29:36.643108] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.489 [2024-11-15 12:29:36.643126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.489 [2024-11-15 12:29:36.643156] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.489 [2024-11-15 12:29:36.643173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.748 NEW_FUNC[1/716]: 0x452708 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:07:56.748 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:56.748 #19 NEW cov: 12312 ft: 12307 corp: 2/71b lim: 105 exec/s: 0 rss: 73Mb L: 70/70 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:56.748 [2024-11-15 12:29:37.003989] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.748 [2024-11-15 12:29:37.004034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.748 [2024-11-15 12:29:37.004086] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.748 [2024-11-15 12:29:37.004105] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.748 [2024-11-15 12:29:37.004135] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:11 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.748 [2024-11-15 12:29:37.004152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.748 #20 NEW cov: 12425 ft: 12892 corp: 3/150b lim: 105 exec/s: 0 rss: 74Mb L: 79/79 MS: 1 CrossOver- 00:07:57.007 [2024-11-15 12:29:37.094113] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.007 [2024-11-15 12:29:37.094148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.007 [2024-11-15 12:29:37.094183] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.007 [2024-11-15 12:29:37.094201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.007 [2024-11-15 12:29:37.094233] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.007 [2024-11-15 12:29:37.094255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.007 #26 NEW cov: 12431 ft: 13163 corp: 4/220b lim: 105 exec/s: 0 rss: 74Mb L: 70/79 MS: 1 ChangeByte- 00:07:57.007 [2024-11-15 12:29:37.154164] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.007 [2024-11-15 12:29:37.154195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.007 [2024-11-15 12:29:37.154243] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.007 [2024-11-15 12:29:37.154260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.007 [2024-11-15 12:29:37.154292] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.007 [2024-11-15 12:29:37.154308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.007 #27 NEW cov: 12516 ft: 13503 corp: 5/290b lim: 105 exec/s: 0 rss: 74Mb L: 70/79 MS: 1 CopyPart- 00:07:57.007 [2024-11-15 12:29:37.214353] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.007 [2024-11-15 12:29:37.214384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.007 [2024-11-15 12:29:37.214417] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.007 [2024-11-15 12:29:37.214445] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.007 [2024-11-15 12:29:37.214474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.007 [2024-11-15 12:29:37.214490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.007 #28 NEW cov: 12516 ft: 13658 corp: 6/360b lim: 105 exec/s: 0 rss: 74Mb L: 70/79 MS: 1 ShuffleBytes- 00:07:57.007 [2024-11-15 12:29:37.304587] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.008 [2024-11-15 12:29:37.304618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.008 [2024-11-15 12:29:37.304650] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.008 [2024-11-15 12:29:37.304668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.008 [2024-11-15 12:29:37.304700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.008 [2024-11-15 12:29:37.304716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.267 #29 NEW cov: 12516 ft: 13798 corp: 7/423b lim: 105 exec/s: 0 rss: 74Mb L: 63/79 MS: 1 EraseBytes- 00:07:57.267 [2024-11-15 12:29:37.394828] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.267 [2024-11-15 12:29:37.394859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.267 [2024-11-15 12:29:37.394893] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.267 [2024-11-15 12:29:37.394915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.267 [2024-11-15 12:29:37.394946] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.267 [2024-11-15 12:29:37.394963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.267 #30 NEW cov: 12516 ft: 13896 corp: 8/493b lim: 105 exec/s: 0 rss: 74Mb L: 70/79 MS: 1 ChangeByte- 00:07:57.267 [2024-11-15 12:29:37.444853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.267 [2024-11-15 12:29:37.444884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.267 #31 NEW cov: 12516 ft: 14406 corp: 9/528b lim: 105 exec/s: 0 rss: 74Mb L: 35/79 MS: 1 EraseBytes- 00:07:57.267 [2024-11-15 12:29:37.505139] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.267 [2024-11-15 12:29:37.505169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.267 [2024-11-15 12:29:37.505217] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.267 [2024-11-15 12:29:37.505235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.267 [2024-11-15 12:29:37.505266] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.267 [2024-11-15 12:29:37.505282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.267 [2024-11-15 12:29:37.505311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.267 [2024-11-15 12:29:37.505335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.267 NEW_FUNC[1/1]: 0x1c350e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:57.267 #32 NEW cov: 12533 ft: 14944 corp: 10/623b lim: 105 exec/s: 0 rss: 74Mb L: 95/95 MS: 1 InsertRepeatedBytes- 00:07:57.267 [2024-11-15 12:29:37.595416] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.267 [2024-11-15 12:29:37.595445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.267 [2024-11-15 12:29:37.595492] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.267 [2024-11-15 12:29:37.595510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.267 [2024-11-15 12:29:37.595541] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.267 [2024-11-15 12:29:37.595557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.267 [2024-11-15 12:29:37.595586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.267 [2024-11-15 12:29:37.595603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.526 #33 NEW cov: 12533 ft: 14996 corp: 11/720b lim: 105 exec/s: 33 rss: 74Mb L: 97/97 MS: 1 CopyPart- 00:07:57.526 [2024-11-15 12:29:37.655488] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.526 [2024-11-15 12:29:37.655522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.526 [2024-11-15 12:29:37.655571] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:07:57.526 [2024-11-15 12:29:37.655588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.526 [2024-11-15 12:29:37.655619] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:11 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.526 [2024-11-15 12:29:37.655636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.526 #34 NEW cov: 12533 ft: 15055 corp: 12/800b lim: 105 exec/s: 34 rss: 74Mb L: 80/97 MS: 1 InsertByte- 00:07:57.526 [2024-11-15 12:29:37.745832] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.526 [2024-11-15 12:29:37.745862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.526 [2024-11-15 12:29:37.745894] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.526 [2024-11-15 12:29:37.745913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.526 [2024-11-15 12:29:37.745943] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:8193 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.526 [2024-11-15 12:29:37.745960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.526 [2024-11-15 12:29:37.745989] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.526 [2024-11-15 12:29:37.746006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.526 #35 NEW cov: 12533 ft: 15136 corp: 13/897b lim: 105 exec/s: 35 rss: 74Mb L: 97/97 MS: 1 ChangeBit- 00:07:57.526 [2024-11-15 12:29:37.835995] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.526 [2024-11-15 12:29:37.836024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.526 [2024-11-15 12:29:37.836072] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.526 [2024-11-15 12:29:37.836090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.526 [2024-11-15 12:29:37.836122] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:11 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.526 [2024-11-15 12:29:37.836138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.785 #36 NEW cov: 12533 ft: 15154 corp: 14/977b lim: 105 exec/s: 36 rss: 74Mb L: 80/97 MS: 1 InsertByte- 00:07:57.785 [2024-11-15 12:29:37.896108] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:57.785 [2024-11-15 12:29:37.896137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.785 [2024-11-15 12:29:37.896184] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.785 [2024-11-15 12:29:37.896202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.785 [2024-11-15 12:29:37.896238] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:11 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.785 [2024-11-15 12:29:37.896255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.785 #37 NEW cov: 12533 ft: 15165 corp: 15/1057b lim: 105 exec/s: 37 rss: 74Mb L: 80/97 MS: 1 ShuffleBytes- 00:07:57.785 [2024-11-15 12:29:37.986499] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069582356480 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.785 [2024-11-15 12:29:37.986531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.785 [2024-11-15 12:29:37.986579] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.785 [2024-11-15 12:29:37.986599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.785 [2024-11-15 12:29:37.986632] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.785 [2024-11-15 12:29:37.986649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.785 [2024-11-15 12:29:37.986678] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.785 [2024-11-15 12:29:37.986695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.785 [2024-11-15 12:29:37.986725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:0 len:46337 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.785 [2024-11-15 12:29:37.986741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:57.785 #38 NEW cov: 12533 ft: 15213 corp: 16/1162b lim: 105 exec/s: 38 rss: 74Mb L: 105/105 MS: 1 InsertRepeatedBytes- 00:07:57.785 [2024-11-15 12:29:38.076731] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069582356480 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.785 [2024-11-15 12:29:38.076763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.785 [2024-11-15 12:29:38.076795] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.785 [2024-11-15 12:29:38.076814] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.785 [2024-11-15 12:29:38.076844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.785 [2024-11-15 12:29:38.076861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.785 [2024-11-15 12:29:38.076889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.785 [2024-11-15 12:29:38.076906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.785 [2024-11-15 12:29:38.076935] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:0 len:46337 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.785 [2024-11-15 12:29:38.076951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:58.045 #39 NEW cov: 12533 ft: 15282 corp: 17/1267b lim: 105 exec/s: 39 rss: 74Mb L: 105/105 MS: 1 ChangeBinInt- 00:07:58.045 [2024-11-15 12:29:38.166913] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.045 [2024-11-15 12:29:38.166951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.045 [2024-11-15 12:29:38.166986] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.045 [2024-11-15 12:29:38.167006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.045 [2024-11-15 12:29:38.167038] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.045 [2024-11-15 12:29:38.167055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.045 #40 NEW cov: 12533 ft: 15329 corp: 18/1331b lim: 105 exec/s: 40 rss: 75Mb L: 64/105 MS: 1 CopyPart- 00:07:58.045 [2024-11-15 12:29:38.257038] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.045 [2024-11-15 12:29:38.257068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.045 [2024-11-15 12:29:38.257118] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:11 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.045 [2024-11-15 12:29:38.257136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.045 #41 NEW cov: 12533 ft: 15615 corp: 19/1388b lim: 105 exec/s: 41 rss: 75Mb L: 57/105 MS: 1 CrossOver- 00:07:58.045 [2024-11-15 12:29:38.317197] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.045 [2024-11-15 12:29:38.317227] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.045 [2024-11-15 12:29:38.317276] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.045 [2024-11-15 12:29:38.317293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.045 [2024-11-15 12:29:38.317332] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.045 [2024-11-15 12:29:38.317349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.045 #42 NEW cov: 12533 ft: 15647 corp: 20/1458b lim: 105 exec/s: 42 rss: 75Mb L: 70/105 MS: 1 ChangeBit- 00:07:58.045 [2024-11-15 12:29:38.367360] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.045 [2024-11-15 12:29:38.367391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.045 [2024-11-15 12:29:38.367424] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.045 [2024-11-15 12:29:38.367442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.045 [2024-11-15 12:29:38.367473] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:11 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.045 [2024-11-15 12:29:38.367490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.305 #43 NEW cov: 12533 ft: 15667 corp: 21/1538b lim: 105 exec/s: 43 rss: 75Mb L: 80/105 MS: 1 ChangeByte- 00:07:58.305 [2024-11-15 12:29:38.457644] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.305 [2024-11-15 12:29:38.457679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.305 [2024-11-15 12:29:38.457712] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.305 [2024-11-15 12:29:38.457730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.305 [2024-11-15 12:29:38.457761] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.305 [2024-11-15 12:29:38.457777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.305 #44 NEW cov: 12540 ft: 15745 corp: 22/1613b lim: 105 exec/s: 44 rss: 75Mb L: 75/105 MS: 1 InsertRepeatedBytes- 00:07:58.305 [2024-11-15 12:29:38.547870] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.305 [2024-11-15 12:29:38.547902] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.305 [2024-11-15 12:29:38.547949] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.305 [2024-11-15 12:29:38.547968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.305 [2024-11-15 12:29:38.548000] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.305 [2024-11-15 12:29:38.548017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.305 #45 NEW cov: 12540 ft: 15772 corp: 23/1677b lim: 105 exec/s: 45 rss: 75Mb L: 64/105 MS: 1 ChangeByte- 00:07:58.305 [2024-11-15 12:29:38.638083] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.305 [2024-11-15 12:29:38.638113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.305 [2024-11-15 12:29:38.638160] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.305 [2024-11-15 12:29:38.638178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.305 [2024-11-15 12:29:38.638209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.305 [2024-11-15 12:29:38.638225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.564 #46 NEW cov: 12540 ft: 15801 corp: 24/1752b lim: 105 exec/s: 23 rss: 75Mb L: 75/105 MS: 1 ChangeByte- 00:07:58.564 #46 DONE cov: 12540 ft: 15801 corp: 24/1752b lim: 105 exec/s: 23 rss: 75Mb 00:07:58.564 Done 46 runs in 2 second(s) 00:07:58.564 12:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:07:58.564 12:29:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:58.564 12:29:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:58.564 12:29:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:07:58.564 12:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:07:58.564 12:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:58.564 12:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:58.564 12:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:58.564 12:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:07:58.564 12:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:58.564 12:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:58.564 12:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 
17 00:07:58.565 12:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:07:58.565 12:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:58.565 12:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:07:58.565 12:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:58.565 12:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:58.565 12:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:58.565 12:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:07:58.565 [2024-11-15 12:29:38.868508] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:07:58.565 [2024-11-15 12:29:38.868579] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid675888 ] 00:07:58.824 [2024-11-15 12:29:39.083459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.824 [2024-11-15 12:29:39.121681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.082 [2024-11-15 12:29:39.181065] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.082 [2024-11-15 12:29:39.197288] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:07:59.082 INFO: Running with entropic power schedule (0xFF, 100). 00:07:59.082 INFO: Seed: 1814292557 00:07:59.082 INFO: Loaded 1 modules (387659 inline 8-bit counters): 387659 [0x2c4084c, 0x2c9f297), 00:07:59.082 INFO: Loaded 1 PC tables (387659 PCs): 387659 [0x2c9f298,0x3289748), 00:07:59.082 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:59.082 INFO: A corpus is not provided, starting from an empty corpus 00:07:59.082 #2 INITED exec/s: 0 rss: 66Mb 00:07:59.082 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:59.082 This may also happen if the target rejected all inputs we tried so far 00:07:59.082 [2024-11-15 12:29:39.252237] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.082 [2024-11-15 12:29:39.252276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.082 [2024-11-15 12:29:39.252323] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.082 [2024-11-15 12:29:39.252342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.342 NEW_FUNC[1/717]: 0x455a88 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:07:59.342 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:59.342 #3 NEW cov: 12333 ft: 12332 corp: 2/67b lim: 120 exec/s: 0 rss: 73Mb L: 66/66 MS: 1 InsertRepeatedBytes- 00:07:59.342 [2024-11-15 12:29:39.603130] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.342 [2024-11-15 12:29:39.603180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.342 [2024-11-15 12:29:39.603214] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.342 [2024-11-15 12:29:39.603231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.342 [2024-11-15 12:29:39.603259] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.342 [2024-11-15 12:29:39.603274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.342 #4 NEW cov: 12446 ft: 13280 corp: 3/142b lim: 120 exec/s: 0 rss: 74Mb L: 75/75 MS: 1 CrossOver- 00:07:59.601 [2024-11-15 12:29:39.693278] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.601 [2024-11-15 12:29:39.693322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.601 [2024-11-15 12:29:39.693358] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.601 [2024-11-15 12:29:39.693377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.601 [2024-11-15 12:29:39.693407] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14106333702460080323 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.601 [2024-11-15 12:29:39.693423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.601 #10 NEW cov: 12452 ft: 13401 corp: 4/217b lim: 120 exec/s: 0 rss: 74Mb L: 75/75 MS: 1 CMP- DE: "\363_\320o\366\212A\000"- 00:07:59.601 [2024-11-15 12:29:39.783524] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.601 [2024-11-15 12:29:39.783555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.601 [2024-11-15 12:29:39.783603] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.601 [2024-11-15 12:29:39.783620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.601 [2024-11-15 12:29:39.783651] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.601 [2024-11-15 12:29:39.783668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.601 [2024-11-15 12:29:39.783696] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.601 [2024-11-15 12:29:39.783713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.601 #11 NEW cov: 12537 ft: 14245 corp: 5/327b lim: 120 exec/s: 0 rss: 74Mb L: 110/110 MS: 1 CrossOver- 00:07:59.601 [2024-11-15 12:29:39.843611] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.601 [2024-11-15 12:29:39.843641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.601 [2024-11-15 12:29:39.843679] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.601 [2024-11-15 12:29:39.843696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.601 [2024-11-15 12:29:39.843727] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14106333702460080323 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.601 [2024-11-15 12:29:39.843744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.602 #12 NEW cov: 12537 ft: 14317 corp: 6/402b lim: 120 exec/s: 0 rss: 74Mb L: 75/110 MS: 1 ShuffleBytes- 00:07:59.602 [2024-11-15 12:29:39.933919] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.602 [2024-11-15 12:29:39.933949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.602 [2024-11-15 12:29:39.933996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.602 [2024-11-15 12:29:39.934015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.602 [2024-11-15 12:29:39.934046] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.602 [2024-11-15 12:29:39.934062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.602 [2024-11-15 12:29:39.934091] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.602 [2024-11-15 12:29:39.934108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.861 #13 NEW cov: 12537 ft: 14472 corp: 7/515b lim: 120 exec/s: 0 rss: 74Mb L: 113/113 MS: 1 CopyPart- 00:07:59.861 [2024-11-15 12:29:40.024233] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.861 [2024-11-15 12:29:40.024270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.861 [2024-11-15 12:29:40.024325] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.861 [2024-11-15 12:29:40.024345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.861 [2024-11-15 12:29:40.024376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.861 [2024-11-15 12:29:40.024394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.861 [2024-11-15 12:29:40.024423] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.861 [2024-11-15 12:29:40.024440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.861 #14 NEW cov: 12537 ft: 14525 corp: 8/628b lim: 120 exec/s: 0 rss: 74Mb L: 113/113 MS: 1 ChangeBinInt- 00:07:59.861 [2024-11-15 12:29:40.124501] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.861 [2024-11-15 12:29:40.124543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.861 [2024-11-15 12:29:40.124582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.861 [2024-11-15 12:29:40.124600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.861 [2024-11-15 12:29:40.124629] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.861 [2024-11-15 12:29:40.124646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.861 [2024-11-15 12:29:40.124675] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.861 [2024-11-15 12:29:40.124691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.861 NEW_FUNC[1/1]: 0x1c350e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:59.861 #15 NEW cov: 12554 ft: 14578 corp: 9/742b lim: 120 exec/s: 0 rss: 74Mb L: 114/114 MS: 1 InsertByte- 00:08:00.120 [2024-11-15 12:29:40.214653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.120 [2024-11-15 12:29:40.214686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.120 [2024-11-15 12:29:40.214735] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.120 [2024-11-15 12:29:40.214753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.120 [2024-11-15 12:29:40.214783] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.120 [2024-11-15 12:29:40.214799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.120 [2024-11-15 12:29:40.214828] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.120 [2024-11-15 12:29:40.214845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.120 #16 NEW cov: 12554 ft: 14659 corp: 10/855b lim: 120 exec/s: 16 rss: 74Mb L: 113/114 MS: 1 CrossOver- 00:08:00.120 [2024-11-15 12:29:40.274784] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.120 [2024-11-15 12:29:40.274814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.120 [2024-11-15 12:29:40.274861] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.120 [2024-11-15 12:29:40.274879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.120 [2024-11-15 12:29:40.274910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.120 [2024-11-15 12:29:40.274926] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.120 [2024-11-15 12:29:40.274955] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.120 [2024-11-15 12:29:40.274971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.120 #17 NEW cov: 12554 ft: 14731 corp: 11/968b lim: 120 exec/s: 17 rss: 74Mb L: 113/114 MS: 1 ChangeBit- 00:08:00.120 [2024-11-15 12:29:40.324874] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.120 [2024-11-15 12:29:40.324905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.120 [2024-11-15 12:29:40.324938] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.120 [2024-11-15 12:29:40.324956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.120 [2024-11-15 12:29:40.324986] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.120 [2024-11-15 12:29:40.325002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.120 #18 NEW cov: 12554 ft: 14756 corp: 12/1044b lim: 120 exec/s: 18 rss: 74Mb L: 76/114 MS: 1 CrossOver- 00:08:00.120 [2024-11-15 12:29:40.385041] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.120 [2024-11-15 12:29:40.385070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.120 [2024-11-15 12:29:40.385118] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.120 [2024-11-15 12:29:40.385135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.120 [2024-11-15 12:29:40.385165] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.120 [2024-11-15 12:29:40.385182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.120 [2024-11-15 12:29:40.385210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.120 [2024-11-15 12:29:40.385226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.120 #19 NEW cov: 12554 ft: 14780 corp: 13/1158b lim: 120 exec/s: 19 rss: 74Mb L: 114/114 MS: 1 ChangeByte- 00:08:00.380 [2024-11-15 12:29:40.475351] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.380 [2024-11-15 12:29:40.475382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.380 [2024-11-15 12:29:40.475413] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.380 [2024-11-15 12:29:40.475431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.380 [2024-11-15 12:29:40.475462] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14106333702460080323 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.380 [2024-11-15 12:29:40.475478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.380 [2024-11-15 12:29:40.475507] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.380 [2024-11-15 12:29:40.475523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.380 #20 NEW cov: 12554 ft: 14842 corp: 14/1265b lim: 120 exec/s: 20 rss: 74Mb L: 107/114 MS: 1 InsertRepeatedBytes- 00:08:00.380 [2024-11-15 12:29:40.535429] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.380 [2024-11-15 12:29:40.535460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.380 [2024-11-15 12:29:40.535492] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.380 [2024-11-15 12:29:40.535510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.380 [2024-11-15 12:29:40.535541] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14106333702460080323 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.380 [2024-11-15 12:29:40.535557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.380 #21 NEW cov: 12554 ft: 14921 corp: 15/1340b lim: 120 exec/s: 21 rss: 74Mb L: 75/114 MS: 1 ChangeByte- 00:08:00.380 [2024-11-15 12:29:40.625746] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.380 [2024-11-15 12:29:40.625777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.380 [2024-11-15 12:29:40.625824] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.380 [2024-11-15 12:29:40.625842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.380 [2024-11-15 
12:29:40.625872] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.380 [2024-11-15 12:29:40.625888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.380 [2024-11-15 12:29:40.625917] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.380 [2024-11-15 12:29:40.625933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.380 #22 NEW cov: 12554 ft: 14952 corp: 16/1454b lim: 120 exec/s: 22 rss: 74Mb L: 114/114 MS: 1 InsertByte- 00:08:00.380 [2024-11-15 12:29:40.715915] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.380 [2024-11-15 12:29:40.715945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.380 [2024-11-15 12:29:40.715993] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.380 [2024-11-15 12:29:40.716011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.380 [2024-11-15 12:29:40.716041] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.380 [2024-11-15 12:29:40.716058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.639 #23 NEW cov: 12554 ft: 14968 corp: 17/1530b lim: 120 exec/s: 23 rss: 74Mb L: 76/114 MS: 1 EraseBytes- 00:08:00.639 [2024-11-15 12:29:40.775895] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.639 [2024-11-15 12:29:40.775933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.639 #24 NEW cov: 12554 ft: 15790 corp: 18/1559b lim: 120 exec/s: 24 rss: 74Mb L: 29/114 MS: 1 CrossOver- 00:08:00.639 [2024-11-15 12:29:40.836204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.639 [2024-11-15 12:29:40.836236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.639 [2024-11-15 12:29:40.836269] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.639 [2024-11-15 12:29:40.836287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.639 [2024-11-15 12:29:40.836325] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.639 [2024-11-15 12:29:40.836358] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.639 #25 NEW cov: 12554 ft: 15809 corp: 19/1652b lim: 120 exec/s: 25 rss: 74Mb L: 93/114 MS: 1 EraseBytes- 00:08:00.639 [2024-11-15 12:29:40.896252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.639 [2024-11-15 12:29:40.896284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.639 #26 NEW cov: 12554 ft: 15839 corp: 20/1697b lim: 120 exec/s: 26 rss: 75Mb L: 45/114 MS: 1 EraseBytes- 00:08:00.898 [2024-11-15 12:29:40.986753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.898 [2024-11-15 12:29:40.986786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.898 [2024-11-15 12:29:40.986821] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.898 [2024-11-15 12:29:40.986840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.899 [2024-11-15 12:29:40.986871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.899 [2024-11-15 12:29:40.986888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.899 [2024-11-15 12:29:40.986918] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.899 [2024-11-15 12:29:40.986934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.899 #27 NEW cov: 12554 ft: 15857 corp: 21/1794b lim: 120 exec/s: 27 rss: 75Mb L: 97/114 MS: 1 CrossOver- 00:08:00.899 [2024-11-15 12:29:41.046845] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14106333701193581507 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.899 [2024-11-15 12:29:41.046879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.899 [2024-11-15 12:29:41.046913] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.899 [2024-11-15 12:29:41.046932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.899 [2024-11-15 12:29:41.046968] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.899 [2024-11-15 12:29:41.046985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.899 #32 NEW cov: 12554 ft: 15883 corp: 22/1877b lim: 120 exec/s: 32 rss: 
75Mb L: 83/114 MS: 5 CopyPart-ChangeBit-ChangeByte-ChangeBit-CrossOver- 00:08:00.899 [2024-11-15 12:29:41.106993] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.899 [2024-11-15 12:29:41.107023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.899 [2024-11-15 12:29:41.107056] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.899 [2024-11-15 12:29:41.107074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.899 [2024-11-15 12:29:41.107104] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.899 [2024-11-15 12:29:41.107121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.899 [2024-11-15 12:29:41.107149] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.899 [2024-11-15 12:29:41.107166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.899 #33 NEW cov: 12561 ft: 15901 corp: 23/1990b lim: 120 exec/s: 33 rss: 75Mb L: 113/114 MS: 1 ChangeByte- 00:08:00.899 [2024-11-15 12:29:41.167109] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.899 [2024-11-15 12:29:41.167139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.899 [2024-11-15 12:29:41.167184] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.899 [2024-11-15 12:29:41.167203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.899 [2024-11-15 12:29:41.167233] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.899 [2024-11-15 12:29:41.167250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.899 [2024-11-15 12:29:41.167278] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14106333703424951235 len:50116 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.899 [2024-11-15 12:29:41.167294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.899 #34 NEW cov: 12561 ft: 15915 corp: 24/2104b lim: 120 exec/s: 17 rss: 75Mb L: 114/114 MS: 1 CopyPart- 00:08:00.899 #34 DONE cov: 12561 ft: 15915 corp: 24/2104b lim: 120 exec/s: 17 rss: 75Mb 00:08:00.899 ###### Recommended dictionary. ###### 00:08:00.899 "\363_\320o\366\212A\000" # Uses: 0 00:08:00.899 ###### End of recommended dictionary. 
###### 00:08:00.899 Done 34 runs in 2 second(s) 00:08:01.158 12:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:08:01.158 12:29:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:01.158 12:29:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:01.158 12:29:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:08:01.158 12:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:08:01.158 12:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:01.158 12:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:01.158 12:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:01.158 12:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:08:01.158 12:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:01.158 12:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:01.158 12:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:08:01.158 12:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:08:01.158 12:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:01.158 12:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:08:01.158 12:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:01.158 12:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:01.158 12:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:01.158 12:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:08:01.158 [2024-11-15 12:29:41.399516] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:08:01.159 [2024-11-15 12:29:41.399589] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid676242 ] 00:08:01.478 [2024-11-15 12:29:41.616024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.478 [2024-11-15 12:29:41.655482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.478 [2024-11-15 12:29:41.714854] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.478 [2024-11-15 12:29:41.731086] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:08:01.478 INFO: Running with entropic power schedule (0xFF, 100). 00:08:01.478 INFO: Seed: 51303062 00:08:01.478 INFO: Loaded 1 modules (387659 inline 8-bit counters): 387659 [0x2c4084c, 0x2c9f297), 00:08:01.478 INFO: Loaded 1 PC tables (387659 PCs): 387659 [0x2c9f298,0x3289748), 00:08:01.478 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:01.478 INFO: A corpus is not provided, starting from an empty corpus 00:08:01.478 #2 INITED exec/s: 0 rss: 66Mb 00:08:01.478 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:01.478 This may also happen if the target rejected all inputs we tried so far 00:08:01.478 [2024-11-15 12:29:41.780057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:01.478 [2024-11-15 12:29:41.780089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.478 [2024-11-15 12:29:41.780127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:01.478 [2024-11-15 12:29:41.780143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.478 [2024-11-15 12:29:41.780197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:01.478 [2024-11-15 12:29:41.780217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.478 [2024-11-15 12:29:41.780272] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:01.478 [2024-11-15 12:29:41.780285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:01.791 NEW_FUNC[1/710]: 0x459378 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:08:01.791 NEW_FUNC[2/710]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:01.791 #25 NEW cov: 12232 ft: 12230 corp: 2/84b lim: 100 exec/s: 0 rss: 73Mb L: 83/83 MS: 3 ChangeByte-CrossOver-InsertRepeatedBytes- 00:08:01.791 [2024-11-15 12:29:42.121831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:01.791 [2024-11-15 12:29:42.121880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.050 NEW_FUNC[1/5]: 0x194c618 in 
spdk_nvme_qpair_process_completions /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:764 00:08:02.050 NEW_FUNC[2/5]: 0x19b6c98 in nvme_transport_qpair_process_completions /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_transport.c:659 00:08:02.050 #26 NEW cov: 12389 ft: 13400 corp: 3/104b lim: 100 exec/s: 0 rss: 73Mb L: 20/83 MS: 1 InsertRepeatedBytes- 00:08:02.050 [2024-11-15 12:29:42.182945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.050 [2024-11-15 12:29:42.182979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.050 [2024-11-15 12:29:42.183052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:02.050 [2024-11-15 12:29:42.183069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.050 [2024-11-15 12:29:42.183152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:02.050 [2024-11-15 12:29:42.183170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.050 [2024-11-15 12:29:42.183258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:02.050 [2024-11-15 12:29:42.183276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.050 #27 NEW cov: 12395 ft: 13727 corp: 4/188b lim: 100 exec/s: 0 rss: 73Mb L: 84/84 MS: 1 InsertByte- 00:08:02.050 [2024-11-15 12:29:42.263202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.050 [2024-11-15 12:29:42.263236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.050 [2024-11-15 12:29:42.263307] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:02.050 [2024-11-15 12:29:42.263331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.050 [2024-11-15 12:29:42.263398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:02.050 [2024-11-15 12:29:42.263415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.050 [2024-11-15 12:29:42.263503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:02.050 [2024-11-15 12:29:42.263519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.050 #28 NEW cov: 12480 ft: 13947 corp: 5/272b lim: 100 exec/s: 0 rss: 73Mb L: 84/84 MS: 1 CopyPart- 00:08:02.050 [2024-11-15 12:29:42.333044] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.050 [2024-11-15 12:29:42.333074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.050 [2024-11-15 12:29:42.333147] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE 
ZEROES (08) sqid:1 cid:1 nsid:0 00:08:02.050 [2024-11-15 12:29:42.333163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.050 #39 NEW cov: 12480 ft: 14350 corp: 6/331b lim: 100 exec/s: 0 rss: 74Mb L: 59/84 MS: 1 EraseBytes- 00:08:02.309 [2024-11-15 12:29:42.402998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.309 [2024-11-15 12:29:42.403028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.309 #43 NEW cov: 12480 ft: 14419 corp: 7/353b lim: 100 exec/s: 0 rss: 74Mb L: 22/84 MS: 4 EraseBytes-CopyPart-InsertByte-CopyPart- 00:08:02.309 [2024-11-15 12:29:42.473279] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.309 [2024-11-15 12:29:42.473309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.309 #44 NEW cov: 12480 ft: 14498 corp: 8/374b lim: 100 exec/s: 0 rss: 74Mb L: 21/84 MS: 1 CopyPart- 00:08:02.309 [2024-11-15 12:29:42.523494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.309 [2024-11-15 12:29:42.523522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.309 #53 NEW cov: 12480 ft: 14561 corp: 9/396b lim: 100 exec/s: 0 rss: 74Mb L: 22/84 MS: 4 EraseBytes-ChangeBinInt-ShuffleBytes-InsertRepeatedBytes- 00:08:02.309 [2024-11-15 12:29:42.573641] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.309 [2024-11-15 12:29:42.573669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.309 #54 NEW cov: 12480 ft: 14650 corp: 10/418b lim: 100 exec/s: 0 rss: 74Mb L: 22/84 MS: 1 InsertByte- 00:08:02.309 [2024-11-15 12:29:42.644797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.309 [2024-11-15 12:29:42.644826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.309 [2024-11-15 12:29:42.644907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:02.309 [2024-11-15 12:29:42.644925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.309 [2024-11-15 12:29:42.645009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:02.309 [2024-11-15 12:29:42.645027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.309 [2024-11-15 12:29:42.645120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:02.309 [2024-11-15 12:29:42.645136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.568 NEW_FUNC[1/1]: 0x1c350e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:02.568 #58 NEW cov: 12503 ft: 14705 corp: 11/510b lim: 100 exec/s: 0 
rss: 74Mb L: 92/92 MS: 4 CopyPart-CopyPart-CrossOver-InsertRepeatedBytes- 00:08:02.568 [2024-11-15 12:29:42.694269] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.568 [2024-11-15 12:29:42.694297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.568 #59 NEW cov: 12503 ft: 14771 corp: 12/533b lim: 100 exec/s: 0 rss: 74Mb L: 23/92 MS: 1 CrossOver- 00:08:02.568 [2024-11-15 12:29:42.765319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.568 [2024-11-15 12:29:42.765347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.568 [2024-11-15 12:29:42.765428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:02.568 [2024-11-15 12:29:42.765445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.568 [2024-11-15 12:29:42.765531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:02.568 [2024-11-15 12:29:42.765548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.568 [2024-11-15 12:29:42.765638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:02.568 [2024-11-15 12:29:42.765653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.568 #60 NEW cov: 12503 ft: 14777 corp: 13/616b lim: 100 exec/s: 60 rss: 74Mb L: 83/92 MS: 1 CopyPart- 00:08:02.568 [2024-11-15 12:29:42.815414] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.568 [2024-11-15 12:29:42.815442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.568 [2024-11-15 12:29:42.815524] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:02.568 [2024-11-15 12:29:42.815543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.568 [2024-11-15 12:29:42.815629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:02.568 [2024-11-15 12:29:42.815646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.568 [2024-11-15 12:29:42.815732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:02.568 [2024-11-15 12:29:42.815747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.568 #61 NEW cov: 12503 ft: 14826 corp: 14/700b lim: 100 exec/s: 61 rss: 74Mb L: 84/92 MS: 1 ShuffleBytes- 00:08:02.568 [2024-11-15 12:29:42.865737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.568 [2024-11-15 12:29:42.865765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:08:02.568 [2024-11-15 12:29:42.865840] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:02.568 [2024-11-15 12:29:42.865857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.568 [2024-11-15 12:29:42.865942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:02.568 [2024-11-15 12:29:42.865959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.568 [2024-11-15 12:29:42.866042] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:02.568 [2024-11-15 12:29:42.866058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.568 #64 NEW cov: 12503 ft: 14837 corp: 15/781b lim: 100 exec/s: 64 rss: 74Mb L: 81/92 MS: 3 ChangeByte-CopyPart-InsertRepeatedBytes- 00:08:02.827 [2024-11-15 12:29:42.915806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.827 [2024-11-15 12:29:42.915834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.827 [2024-11-15 12:29:42.915912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:02.827 [2024-11-15 12:29:42.915929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.827 [2024-11-15 12:29:42.916013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:02.827 [2024-11-15 12:29:42.916030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.827 #68 NEW cov: 12503 ft: 15101 corp: 16/854b lim: 100 exec/s: 68 rss: 74Mb L: 73/92 MS: 4 EraseBytes-EraseBytes-InsertByte-CrossOver- 00:08:02.827 [2024-11-15 12:29:42.985320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.827 [2024-11-15 12:29:42.985349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.827 #69 NEW cov: 12503 ft: 15130 corp: 17/890b lim: 100 exec/s: 69 rss: 74Mb L: 36/92 MS: 1 InsertRepeatedBytes- 00:08:02.827 [2024-11-15 12:29:43.035666] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.827 [2024-11-15 12:29:43.035694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.827 #70 NEW cov: 12503 ft: 15144 corp: 18/912b lim: 100 exec/s: 70 rss: 74Mb L: 22/92 MS: 1 ChangeBinInt- 00:08:02.827 [2024-11-15 12:29:43.086344] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.827 [2024-11-15 12:29:43.086371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.827 [2024-11-15 12:29:43.086441] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:02.827 [2024-11-15 12:29:43.086459] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.827 [2024-11-15 12:29:43.086540] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:02.827 [2024-11-15 12:29:43.086559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.827 #71 NEW cov: 12503 ft: 15161 corp: 19/972b lim: 100 exec/s: 71 rss: 74Mb L: 60/92 MS: 1 InsertByte- 00:08:02.827 [2024-11-15 12:29:43.156978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.827 [2024-11-15 12:29:43.157006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.827 [2024-11-15 12:29:43.157085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:02.827 [2024-11-15 12:29:43.157101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.827 [2024-11-15 12:29:43.157186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:02.828 [2024-11-15 12:29:43.157201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.828 [2024-11-15 12:29:43.157290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:02.828 [2024-11-15 12:29:43.157306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.087 #72 NEW cov: 12503 ft: 15172 corp: 20/1062b lim: 100 exec/s: 72 rss: 74Mb L: 90/92 MS: 1 CopyPart- 00:08:03.087 [2024-11-15 12:29:43.206560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.087 [2024-11-15 12:29:43.206589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.087 #78 NEW cov: 12503 ft: 15182 corp: 21/1084b lim: 100 exec/s: 78 rss: 74Mb L: 22/92 MS: 1 ChangeBinInt- 00:08:03.087 [2024-11-15 12:29:43.276768] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.087 [2024-11-15 12:29:43.276796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.087 #79 NEW cov: 12503 ft: 15185 corp: 22/1105b lim: 100 exec/s: 79 rss: 74Mb L: 21/92 MS: 1 ChangeBinInt- 00:08:03.087 [2024-11-15 12:29:43.327909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.087 [2024-11-15 12:29:43.327937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.087 [2024-11-15 12:29:43.328026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.087 [2024-11-15 12:29:43.328044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.087 [2024-11-15 12:29:43.328125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.087 
[2024-11-15 12:29:43.328143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.087 [2024-11-15 12:29:43.328232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:03.087 [2024-11-15 12:29:43.328249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.087 #80 NEW cov: 12503 ft: 15191 corp: 23/1195b lim: 100 exec/s: 80 rss: 74Mb L: 90/92 MS: 1 CopyPart- 00:08:03.087 [2024-11-15 12:29:43.398155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.087 [2024-11-15 12:29:43.398184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.087 [2024-11-15 12:29:43.398266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.087 [2024-11-15 12:29:43.398285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.087 [2024-11-15 12:29:43.398376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.087 [2024-11-15 12:29:43.398391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.087 [2024-11-15 12:29:43.398476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:03.087 [2024-11-15 12:29:43.398493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.087 #81 NEW cov: 12503 ft: 15209 corp: 24/1285b lim: 100 exec/s: 81 rss: 74Mb L: 90/92 MS: 1 ChangeBinInt- 00:08:03.346 [2024-11-15 12:29:43.447564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.346 [2024-11-15 12:29:43.447595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.346 #82 NEW cov: 12503 ft: 15226 corp: 25/1307b lim: 100 exec/s: 82 rss: 74Mb L: 22/92 MS: 1 ChangeByte- 00:08:03.346 [2024-11-15 12:29:43.498590] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.346 [2024-11-15 12:29:43.498618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.346 [2024-11-15 12:29:43.498703] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.346 [2024-11-15 12:29:43.498720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.346 [2024-11-15 12:29:43.498809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.347 [2024-11-15 12:29:43.498827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.347 [2024-11-15 12:29:43.498925] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:03.347 [2024-11-15 12:29:43.498942] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.347 #83 NEW cov: 12503 ft: 15237 corp: 26/1397b lim: 100 exec/s: 83 rss: 74Mb L: 90/92 MS: 1 ShuffleBytes- 00:08:03.347 [2024-11-15 12:29:43.568642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.347 [2024-11-15 12:29:43.568674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.347 [2024-11-15 12:29:43.568755] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.347 [2024-11-15 12:29:43.568772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.347 [2024-11-15 12:29:43.568851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.347 [2024-11-15 12:29:43.568870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.347 #84 NEW cov: 12503 ft: 15239 corp: 27/1459b lim: 100 exec/s: 84 rss: 74Mb L: 62/92 MS: 1 EraseBytes- 00:08:03.347 [2024-11-15 12:29:43.619018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.347 [2024-11-15 12:29:43.619050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.347 [2024-11-15 12:29:43.619123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.347 [2024-11-15 12:29:43.619141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.347 [2024-11-15 12:29:43.619227] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.347 [2024-11-15 12:29:43.619246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.347 [2024-11-15 12:29:43.619338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:03.347 [2024-11-15 12:29:43.619356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.347 #85 NEW cov: 12503 ft: 15268 corp: 28/1544b lim: 100 exec/s: 85 rss: 74Mb L: 85/92 MS: 1 InsertByte- 00:08:03.347 [2024-11-15 12:29:43.669477] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.347 [2024-11-15 12:29:43.669512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.347 [2024-11-15 12:29:43.669608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.347 [2024-11-15 12:29:43.669630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.347 [2024-11-15 12:29:43.669687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.347 [2024-11-15 12:29:43.669705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.347 [2024-11-15 12:29:43.669783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:03.347 [2024-11-15 12:29:43.669801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.606 #86 NEW cov: 12503 ft: 15304 corp: 29/1634b lim: 100 exec/s: 86 rss: 74Mb L: 90/92 MS: 1 CopyPart- 00:08:03.606 [2024-11-15 12:29:43.719178] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.606 [2024-11-15 12:29:43.719212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.606 [2024-11-15 12:29:43.719326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.606 [2024-11-15 12:29:43.719344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.606 #87 NEW cov: 12503 ft: 15363 corp: 30/1693b lim: 100 exec/s: 87 rss: 74Mb L: 59/92 MS: 1 ShuffleBytes- 00:08:03.606 [2024-11-15 12:29:43.769885] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.606 [2024-11-15 12:29:43.769916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.606 [2024-11-15 12:29:43.769991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.606 [2024-11-15 12:29:43.770009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.606 [2024-11-15 12:29:43.770095] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.606 [2024-11-15 12:29:43.770113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.606 [2024-11-15 12:29:43.770198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:03.606 [2024-11-15 12:29:43.770216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.606 #88 NEW cov: 12503 ft: 15381 corp: 31/1788b lim: 100 exec/s: 44 rss: 74Mb L: 95/95 MS: 1 InsertRepeatedBytes- 00:08:03.606 #88 DONE cov: 12503 ft: 15381 corp: 31/1788b lim: 100 exec/s: 44 rss: 74Mb 00:08:03.606 Done 88 runs in 2 second(s) 00:08:03.607 12:29:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:08:03.607 12:29:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:03.607 12:29:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:03.607 12:29:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:08:03.607 12:29:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:08:03.607 12:29:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:03.607 12:29:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:03.607 12:29:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 
00:08:03.607 12:29:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:08:03.607 12:29:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:03.607 12:29:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:03.607 12:29:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:08:03.607 12:29:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:08:03.607 12:29:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:03.607 12:29:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:08:03.607 12:29:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:03.607 12:29:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:03.607 12:29:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:03.607 12:29:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:08:03.866 [2024-11-15 12:29:43.959679] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:08:03.866 [2024-11-15 12:29:43.959763] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid676606 ] 00:08:03.866 [2024-11-15 12:29:44.178027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.125 [2024-11-15 12:29:44.218370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.125 [2024-11-15 12:29:44.278095] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.125 [2024-11-15 12:29:44.294305] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:08:04.125 INFO: Running with entropic power schedule (0xFF, 100). 00:08:04.125 INFO: Seed: 2614311587 00:08:04.125 INFO: Loaded 1 modules (387659 inline 8-bit counters): 387659 [0x2c4084c, 0x2c9f297), 00:08:04.125 INFO: Loaded 1 PC tables (387659 PCs): 387659 [0x2c9f298,0x3289748), 00:08:04.125 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:04.125 INFO: A corpus is not provided, starting from an empty corpus 00:08:04.125 #2 INITED exec/s: 0 rss: 65Mb 00:08:04.125 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:04.125 This may also happen if the target rejected all inputs we tried so far 00:08:04.125 [2024-11-15 12:29:44.342861] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11574427651575685280 len:41121 00:08:04.125 [2024-11-15 12:29:44.342892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.125 [2024-11-15 12:29:44.342926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 00:08:04.125 [2024-11-15 12:29:44.342942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.125 [2024-11-15 12:29:44.342994] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 00:08:04.125 [2024-11-15 12:29:44.343010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.384 NEW_FUNC[1/715]: 0x45c338 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:08:04.384 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:04.384 #4 NEW cov: 12254 ft: 12253 corp: 2/34b lim: 50 exec/s: 0 rss: 73Mb L: 33/33 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:04.384 [2024-11-15 12:29:44.673726] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11574427651575685280 len:41121 00:08:04.384 [2024-11-15 12:29:44.673764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.384 [2024-11-15 12:29:44.673814] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 00:08:04.384 [2024-11-15 12:29:44.673830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.384 [2024-11-15 12:29:44.673881] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 00:08:04.384 [2024-11-15 12:29:44.673897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.384 #5 NEW cov: 12367 ft: 12814 corp: 3/67b lim: 50 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 ChangeBit- 00:08:04.643 [2024-11-15 12:29:44.733845] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11540369179393695904 len:41121 00:08:04.643 [2024-11-15 12:29:44.733880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.643 [2024-11-15 12:29:44.733922] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 00:08:04.643 [2024-11-15 12:29:44.733938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.643 [2024-11-15 12:29:44.733990] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 00:08:04.643 [2024-11-15 12:29:44.734005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.643 #6 NEW cov: 12373 ft: 13082 corp: 4/100b lim: 50 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 ChangeByte- 00:08:04.643 [2024-11-15 12:29:44.773757] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12008468690649130662 len:42663 00:08:04.643 [2024-11-15 12:29:44.773786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.643 [2024-11-15 12:29:44.773837] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12008468691120727718 len:42663 00:08:04.643 [2024-11-15 12:29:44.773854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.643 #10 NEW cov: 12458 ft: 13561 corp: 5/128b lim: 50 exec/s: 0 rss: 73Mb L: 28/33 MS: 4 ChangeBit-InsertByte-CopyPart-InsertRepeatedBytes- 00:08:04.643 [2024-11-15 12:29:44.814000] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11574427651575685280 len:41121 00:08:04.643 [2024-11-15 12:29:44.814028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.643 [2024-11-15 12:29:44.814063] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11574287328920772768 len:1 00:08:04.643 [2024-11-15 12:29:44.814078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.643 [2024-11-15 12:29:44.814128] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 00:08:04.643 [2024-11-15 12:29:44.814143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.643 #11 NEW cov: 12458 ft: 13872 corp: 6/161b lim: 50 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 ChangeBinInt- 00:08:04.643 [2024-11-15 12:29:44.874169] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11574427651575685280 len:41121 00:08:04.643 [2024-11-15 12:29:44.874196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.643 [2024-11-15 12:29:44.874235] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11574287328920772768 len:1 00:08:04.643 [2024-11-15 12:29:44.874250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.643 [2024-11-15 12:29:44.874301] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 00:08:04.643 [2024-11-15 12:29:44.874321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.643 #12 NEW cov: 12458 ft: 13913 corp: 7/194b lim: 50 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 ShuffleBytes- 
00:08:04.643 [2024-11-15 12:29:44.934328] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11574427651575685280 len:41121 00:08:04.643 [2024-11-15 12:29:44.934360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.643 [2024-11-15 12:29:44.934398] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41064 00:08:04.643 [2024-11-15 12:29:44.934414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.643 [2024-11-15 12:29:44.934466] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11574427652997472160 len:41121 00:08:04.643 [2024-11-15 12:29:44.934482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.643 #13 NEW cov: 12458 ft: 14022 corp: 8/227b lim: 50 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 ChangeBinInt- 00:08:04.643 [2024-11-15 12:29:44.974438] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11574427651575685280 len:41121 00:08:04.643 [2024-11-15 12:29:44.974466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.643 [2024-11-15 12:29:44.974511] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 00:08:04.643 [2024-11-15 12:29:44.974527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.643 [2024-11-15 12:29:44.974578] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 00:08:04.643 [2024-11-15 12:29:44.974594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.902 #14 NEW cov: 12458 ft: 14078 corp: 9/260b lim: 50 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 ChangeBinInt- 00:08:04.903 [2024-11-15 12:29:45.014561] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11574427651575685280 len:41121 00:08:04.903 [2024-11-15 12:29:45.014589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.903 [2024-11-15 12:29:45.014648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11574287328920772768 len:1 00:08:04.903 [2024-11-15 12:29:45.014664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.903 [2024-11-15 12:29:45.014719] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11574392469720178848 len:41121 00:08:04.903 [2024-11-15 12:29:45.014734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.903 #15 NEW cov: 12458 ft: 14113 corp: 10/293b lim: 50 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 ChangeBit- 00:08:04.903 [2024-11-15 12:29:45.074638] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11540369179393695904 len:41121 00:08:04.903 [2024-11-15 12:29:45.074665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.903 [2024-11-15 12:29:45.074700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 00:08:04.903 [2024-11-15 12:29:45.074716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.903 #16 NEW cov: 12458 ft: 14166 corp: 11/317b lim: 50 exec/s: 0 rss: 74Mb L: 24/33 MS: 1 EraseBytes- 00:08:04.903 [2024-11-15 12:29:45.134905] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11574427651575685280 len:41121 00:08:04.903 [2024-11-15 12:29:45.134931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.903 [2024-11-15 12:29:45.134985] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11574287328920772768 len:1 00:08:04.903 [2024-11-15 12:29:45.135001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.903 [2024-11-15 12:29:45.135053] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 00:08:04.903 [2024-11-15 12:29:45.135069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.903 #17 NEW cov: 12458 ft: 14184 corp: 12/350b lim: 50 exec/s: 0 rss: 74Mb L: 33/33 MS: 1 ChangeBit- 00:08:04.903 [2024-11-15 12:29:45.174860] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11574427651575685280 len:41121 00:08:04.903 [2024-11-15 12:29:45.174887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.903 [2024-11-15 12:29:45.174939] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:45212608015433728 len:41121 00:08:04.903 [2024-11-15 12:29:45.174956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.903 #18 NEW cov: 12458 ft: 14202 corp: 13/378b lim: 50 exec/s: 0 rss: 74Mb L: 28/33 MS: 1 EraseBytes- 00:08:04.903 [2024-11-15 12:29:45.235293] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11574427651575685280 len:41121 00:08:04.903 [2024-11-15 12:29:45.235324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.903 [2024-11-15 12:29:45.235394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11574287328920772768 len:1 00:08:04.903 [2024-11-15 12:29:45.235409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.903 [2024-11-15 12:29:45.235460] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 
cid:2 nsid:0 lba:11574392469720178848 len:41121 00:08:04.903 [2024-11-15 12:29:45.235475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.903 [2024-11-15 12:29:45.235527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:7275998688541752282 len:161 00:08:04.903 [2024-11-15 12:29:45.235541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.162 NEW_FUNC[1/1]: 0x1c350e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:05.162 #19 NEW cov: 12481 ft: 14492 corp: 14/419b lim: 50 exec/s: 0 rss: 74Mb L: 41/41 MS: 1 CMP- DE: "\275;\332d\371\212A\000"- 00:08:05.162 [2024-11-15 12:29:45.295382] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11574427651575685280 len:41121 00:08:05.162 [2024-11-15 12:29:45.295411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.162 [2024-11-15 12:29:45.295446] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 00:08:05.162 [2024-11-15 12:29:45.295462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.162 [2024-11-15 12:29:45.295514] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11574427652481654944 len:41121 00:08:05.162 [2024-11-15 12:29:45.295530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.162 #20 NEW cov: 12481 ft: 14500 corp: 15/452b lim: 50 exec/s: 0 rss: 74Mb L: 33/41 MS: 1 ChangeByte- 00:08:05.162 [2024-11-15 12:29:45.335309] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11574287326404190368 len:1 00:08:05.162 [2024-11-15 12:29:45.335342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.162 [2024-11-15 12:29:45.335392] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 00:08:05.162 [2024-11-15 12:29:45.335408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.162 #21 NEW cov: 12481 ft: 14598 corp: 16/475b lim: 50 exec/s: 21 rss: 74Mb L: 23/41 MS: 1 EraseBytes- 00:08:05.162 [2024-11-15 12:29:45.395385] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:176611750060288 len:41121 00:08:05.162 [2024-11-15 12:29:45.395413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.162 #22 NEW cov: 12481 ft: 14962 corp: 17/494b lim: 50 exec/s: 22 rss: 74Mb L: 19/41 MS: 1 EraseBytes- 00:08:05.162 [2024-11-15 12:29:45.435707] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13635732422150235814 len:63883 00:08:05.162 [2024-11-15 12:29:45.435734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.162 [2024-11-15 12:29:45.435795] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12008468689415349926 len:42663 00:08:05.162 [2024-11-15 12:29:45.435811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.162 [2024-11-15 12:29:45.435863] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:12008468691120727718 len:42663 00:08:05.162 [2024-11-15 12:29:45.435879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.162 #23 NEW cov: 12481 ft: 15001 corp: 18/530b lim: 50 exec/s: 23 rss: 74Mb L: 36/41 MS: 1 PersAutoDict- DE: "\275;\332d\371\212A\000"- 00:08:05.162 [2024-11-15 12:29:45.495773] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11540369179393695904 len:41121 00:08:05.162 [2024-11-15 12:29:45.495801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.163 [2024-11-15 12:29:45.495835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41054 00:08:05.163 [2024-11-15 12:29:45.495850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.422 #24 NEW cov: 12481 ft: 15010 corp: 19/555b lim: 50 exec/s: 24 rss: 74Mb L: 25/41 MS: 1 InsertByte- 00:08:05.422 [2024-11-15 12:29:45.556019] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11574427651575685280 len:41121 00:08:05.422 [2024-11-15 12:29:45.556046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.422 [2024-11-15 12:29:45.556108] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11574287328920772768 len:40961 00:08:05.422 [2024-11-15 12:29:45.556124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.422 [2024-11-15 12:29:45.556177] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11574427654092226720 len:41121 00:08:05.422 [2024-11-15 12:29:45.556193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.422 #25 NEW cov: 12481 ft: 15070 corp: 20/588b lim: 50 exec/s: 25 rss: 74Mb L: 33/41 MS: 1 ShuffleBytes- 00:08:05.422 [2024-11-15 12:29:45.596391] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:08:05.422 [2024-11-15 12:29:45.596417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.422 [2024-11-15 12:29:45.596467] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:41121 00:08:05.422 [2024-11-15 12:29:45.596483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 
p:0 m:0 dnr:1 00:08:05.422 [2024-11-15 12:29:45.596532] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 00:08:05.422 [2024-11-15 12:29:45.596547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.422 [2024-11-15 12:29:45.596595] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:11574427654092267680 len:41121 00:08:05.422 [2024-11-15 12:29:45.596610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.422 [2024-11-15 12:29:45.596660] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:11574427928970174624 len:41121 00:08:05.422 [2024-11-15 12:29:45.596675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:05.422 #26 NEW cov: 12481 ft: 15115 corp: 21/638b lim: 50 exec/s: 26 rss: 74Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:08:05.422 [2024-11-15 12:29:45.636265] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:178299040 len:1 00:08:05.422 [2024-11-15 12:29:45.636292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.422 [2024-11-15 12:29:45.636358] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11574427651397427360 len:40994 00:08:05.422 [2024-11-15 12:29:45.636375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.422 [2024-11-15 12:29:45.636428] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11574427651397386400 len:41121 00:08:05.422 [2024-11-15 12:29:45.636443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.422 #27 NEW cov: 12481 ft: 15157 corp: 22/674b lim: 50 exec/s: 27 rss: 74Mb L: 36/50 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:08:05.422 [2024-11-15 12:29:45.676278] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12008468690649130662 len:42663 00:08:05.422 [2024-11-15 12:29:45.676306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.422 [2024-11-15 12:29:45.676370] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12008468691120727718 len:42663 00:08:05.422 [2024-11-15 12:29:45.676387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.422 #28 NEW cov: 12481 ft: 15171 corp: 23/702b lim: 50 exec/s: 28 rss: 74Mb L: 28/50 MS: 1 ChangeBit- 00:08:05.422 [2024-11-15 12:29:45.716642] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11574372675994296480 len:41121 00:08:05.422 [2024-11-15 12:29:45.716670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.422 
[2024-11-15 12:29:45.716732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11574427108631421088 len:1 00:08:05.423 [2024-11-15 12:29:45.716752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.423 [2024-11-15 12:29:45.716802] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11574427513968959648 len:41121 00:08:05.423 [2024-11-15 12:29:45.716818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.423 [2024-11-15 12:29:45.716870] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:15736977371739241787 len:16641 00:08:05.423 [2024-11-15 12:29:45.716885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.423 #29 NEW cov: 12481 ft: 15191 corp: 24/744b lim: 50 exec/s: 29 rss: 74Mb L: 42/50 MS: 1 InsertByte- 00:08:05.681 [2024-11-15 12:29:45.776809] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11574427654092267680 len:41121 00:08:05.682 [2024-11-15 12:29:45.776836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.682 [2024-11-15 12:29:45.776897] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11538503722994802848 len:161 00:08:05.682 [2024-11-15 12:29:45.776913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.682 [2024-11-15 12:29:45.776964] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11532206407585669280 len:41121 00:08:05.682 [2024-11-15 12:29:45.776981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.682 [2024-11-15 12:29:45.777033] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:45212608015433728 len:41121 00:08:05.682 [2024-11-15 12:29:45.777048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.682 #30 NEW cov: 12481 ft: 15199 corp: 25/792b lim: 50 exec/s: 30 rss: 74Mb L: 48/50 MS: 1 CrossOver- 00:08:05.682 [2024-11-15 12:29:45.836943] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11574372675994296480 len:41121 00:08:05.682 [2024-11-15 12:29:45.836970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.682 [2024-11-15 12:29:45.837034] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11574427108631421088 len:1 00:08:05.682 [2024-11-15 12:29:45.837050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.682 [2024-11-15 12:29:45.837104] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11574427513968959648 len:41121 00:08:05.682 [2024-11-15 
12:29:45.837119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.682 [2024-11-15 12:29:45.837171] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:11557636917539414176 len:16641 00:08:05.682 [2024-11-15 12:29:45.837187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.682 #31 NEW cov: 12481 ft: 15208 corp: 26/834b lim: 50 exec/s: 31 rss: 74Mb L: 42/50 MS: 1 CrossOver- 00:08:05.682 [2024-11-15 12:29:45.896902] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11574427651575656864 len:41121 00:08:05.682 [2024-11-15 12:29:45.896928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.682 [2024-11-15 12:29:45.896987] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:176611750060288 len:41121 00:08:05.682 [2024-11-15 12:29:45.897002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.682 #32 NEW cov: 12481 ft: 15226 corp: 27/863b lim: 50 exec/s: 32 rss: 74Mb L: 29/50 MS: 1 InsertByte- 00:08:05.682 [2024-11-15 12:29:45.937262] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11574372675994296480 len:41121 00:08:05.682 [2024-11-15 12:29:45.937289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.682 [2024-11-15 12:29:45.937361] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:45212062562951210 len:1 00:08:05.682 [2024-11-15 12:29:45.937377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.682 [2024-11-15 12:29:45.937428] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11574427513968959648 len:41121 00:08:05.682 [2024-11-15 12:29:45.937443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.682 [2024-11-15 12:29:45.937494] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:15736977371739241787 len:16641 00:08:05.682 [2024-11-15 12:29:45.937511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.682 #33 NEW cov: 12481 ft: 15231 corp: 28/905b lim: 50 exec/s: 33 rss: 74Mb L: 42/50 MS: 1 ChangeBinInt- 00:08:05.682 [2024-11-15 12:29:45.977232] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11574427651575685280 len:41121 00:08:05.682 [2024-11-15 12:29:45.977261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.682 [2024-11-15 12:29:45.977306] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11574287328920772768 len:1 00:08:05.682 [2024-11-15 12:29:45.977328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.682 [2024-11-15 12:29:45.977394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 00:08:05.682 [2024-11-15 12:29:45.977410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.682 #34 NEW cov: 12481 ft: 15261 corp: 29/938b lim: 50 exec/s: 34 rss: 74Mb L: 33/50 MS: 1 ChangeByte- 00:08:05.682 [2024-11-15 12:29:46.017609] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:178257920 len:1 00:08:05.682 [2024-11-15 12:29:46.017635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.682 [2024-11-15 12:29:46.017704] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:161 00:08:05.682 [2024-11-15 12:29:46.017718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.682 [2024-11-15 12:29:46.017769] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 00:08:05.682 [2024-11-15 12:29:46.017785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.682 [2024-11-15 12:29:46.017836] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:689889648673 len:41121 00:08:05.682 [2024-11-15 12:29:46.017851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.682 [2024-11-15 12:29:46.017908] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:11574427928970174624 len:41121 00:08:05.682 [2024-11-15 12:29:46.017924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:05.940 #35 NEW cov: 12481 ft: 15292 corp: 30/988b lim: 50 exec/s: 35 rss: 74Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:08:05.940 [2024-11-15 12:29:46.057363] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11574427651575685280 len:41121 00:08:05.940 [2024-11-15 12:29:46.057391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.940 [2024-11-15 12:29:46.057440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 00:08:05.940 [2024-11-15 12:29:46.057468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.940 #36 NEW cov: 12481 ft: 15341 corp: 31/1011b lim: 50 exec/s: 36 rss: 75Mb L: 23/50 MS: 1 EraseBytes- 00:08:05.940 [2024-11-15 12:29:46.117763] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11574372675994296480 len:41121 00:08:05.940 [2024-11-15 12:29:46.117790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.940 
[2024-11-15 12:29:46.117855] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11574427108631421088 len:1 00:08:05.940 [2024-11-15 12:29:46.117870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.940 [2024-11-15 12:29:46.117922] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11574427513968959648 len:41121 00:08:05.940 [2024-11-15 12:29:46.117938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.940 [2024-11-15 12:29:46.117990] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446743664091791359 len:41121 00:08:05.940 [2024-11-15 12:29:46.118006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.940 #37 NEW cov: 12481 ft: 15346 corp: 32/1058b lim: 50 exec/s: 37 rss: 75Mb L: 47/50 MS: 1 InsertRepeatedBytes- 00:08:05.940 [2024-11-15 12:29:46.177581] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11574427654092267680 len:41054 00:08:05.940 [2024-11-15 12:29:46.177609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.940 #38 NEW cov: 12481 ft: 15380 corp: 33/1073b lim: 50 exec/s: 38 rss: 75Mb L: 15/50 MS: 1 EraseBytes- 00:08:05.940 [2024-11-15 12:29:46.238093] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11574427651575685280 len:41121 00:08:05.940 [2024-11-15 12:29:46.238121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.940 [2024-11-15 12:29:46.238170] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11574287328920772768 len:1 00:08:05.940 [2024-11-15 12:29:46.238185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.940 [2024-11-15 12:29:46.238236] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41216 00:08:05.940 [2024-11-15 12:29:46.238252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.940 [2024-11-15 12:29:46.238305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:05.940 [2024-11-15 12:29:46.238326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.940 #39 NEW cov: 12481 ft: 15423 corp: 34/1117b lim: 50 exec/s: 39 rss: 75Mb L: 44/50 MS: 1 InsertRepeatedBytes- 00:08:06.198 [2024-11-15 12:29:46.298292] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:178257920 len:1 00:08:06.198 [2024-11-15 12:29:46.298323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.199 [2024-11-15 12:29:46.298410] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:161 00:08:06.199 [2024-11-15 12:29:46.298426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.199 [2024-11-15 12:29:46.298477] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 00:08:06.199 [2024-11-15 12:29:46.298492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.199 [2024-11-15 12:29:46.298543] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:689889648673 len:41121 00:08:06.199 [2024-11-15 12:29:46.298558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.199 [2024-11-15 12:29:46.298611] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:11574426966897500320 len:41121 00:08:06.199 [2024-11-15 12:29:46.298626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:06.199 #40 NEW cov: 12481 ft: 15434 corp: 35/1167b lim: 50 exec/s: 20 rss: 75Mb L: 50/50 MS: 1 CopyPart- 00:08:06.199 #40 DONE cov: 12481 ft: 15434 corp: 35/1167b lim: 50 exec/s: 20 rss: 75Mb 00:08:06.199 ###### Recommended dictionary. ###### 00:08:06.199 "\275;\332d\371\212A\000" # Uses: 1 00:08:06.199 "\000\000\000\000\000\000\000\000" # Uses: 0 00:08:06.199 ###### End of recommended dictionary. ###### 00:08:06.199 Done 40 runs in 2 second(s) 00:08:06.199 12:29:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:08:06.199 12:29:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:06.199 12:29:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:06.199 12:29:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:08:06.199 12:29:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:08:06.199 12:29:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:06.199 12:29:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:06.199 12:29:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:06.199 12:29:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:08:06.199 12:29:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:06.199 12:29:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:06.199 12:29:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:08:06.199 12:29:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:08:06.199 12:29:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:06.199 12:29:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:08:06.199 12:29:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 
's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:06.199 12:29:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:06.199 12:29:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:06.199 12:29:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:08:06.199 [2024-11-15 12:29:46.496157] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:08:06.199 [2024-11-15 12:29:46.496228] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid676962 ] 00:08:06.457 [2024-11-15 12:29:46.709255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.457 [2024-11-15 12:29:46.747754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.715 [2024-11-15 12:29:46.807273] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.716 [2024-11-15 12:29:46.823501] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:06.716 INFO: Running with entropic power schedule (0xFF, 100). 00:08:06.716 INFO: Seed: 851354453 00:08:06.716 INFO: Loaded 1 modules (387659 inline 8-bit counters): 387659 [0x2c4084c, 0x2c9f297), 00:08:06.716 INFO: Loaded 1 PC tables (387659 PCs): 387659 [0x2c9f298,0x3289748), 00:08:06.716 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:06.716 INFO: A corpus is not provided, starting from an empty corpus 00:08:06.716 #2 INITED exec/s: 0 rss: 66Mb 00:08:06.716 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:06.716 This may also happen if the target rejected all inputs we tried so far 00:08:06.716 [2024-11-15 12:29:46.878365] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:06.716 [2024-11-15 12:29:46.878402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.973 NEW_FUNC[1/712]: 0x45def8 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:08:06.973 NEW_FUNC[2/712]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:06.973 #16 NEW cov: 12283 ft: 12282 corp: 2/32b lim: 90 exec/s: 0 rss: 74Mb L: 31/31 MS: 4 CrossOver-CopyPart-EraseBytes-InsertRepeatedBytes- 00:08:06.973 [2024-11-15 12:29:47.239233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:06.974 [2024-11-15 12:29:47.239278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.974 NEW_FUNC[1/5]: 0x1950bf8 in nvme_complete_register_operations /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:725 00:08:06.974 NEW_FUNC[2/5]: 0x1963e98 in nvme_ctrlr_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1214 00:08:06.974 #17 NEW cov: 12425 ft: 12903 corp: 3/63b lim: 90 exec/s: 0 rss: 74Mb L: 31/31 MS: 1 ChangeBinInt- 00:08:07.233 [2024-11-15 12:29:47.329467] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.233 [2024-11-15 12:29:47.329501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.233 [2024-11-15 12:29:47.329538] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:07.233 [2024-11-15 12:29:47.329556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.233 #24 NEW cov: 12431 ft: 13967 corp: 4/106b lim: 90 exec/s: 0 rss: 74Mb L: 43/43 MS: 2 CrossOver-CrossOver- 00:08:07.233 [2024-11-15 12:29:47.389492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.233 [2024-11-15 12:29:47.389524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.233 #25 NEW cov: 12516 ft: 14350 corp: 5/137b lim: 90 exec/s: 0 rss: 74Mb L: 31/43 MS: 1 CopyPart- 00:08:07.233 [2024-11-15 12:29:47.449681] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.233 [2024-11-15 12:29:47.449714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.233 #26 NEW cov: 12516 ft: 14447 corp: 6/168b lim: 90 exec/s: 0 rss: 74Mb L: 31/43 MS: 1 ChangeBinInt- 00:08:07.233 [2024-11-15 12:29:47.509931] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.233 [2024-11-15 12:29:47.509967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:08:07.492 #27 NEW cov: 12516 ft: 14564 corp: 7/198b lim: 90 exec/s: 0 rss: 74Mb L: 30/43 MS: 1 EraseBytes- 00:08:07.492 [2024-11-15 12:29:47.600047] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.492 [2024-11-15 12:29:47.600078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.492 #28 NEW cov: 12516 ft: 14697 corp: 8/229b lim: 90 exec/s: 0 rss: 74Mb L: 31/43 MS: 1 ChangeByte- 00:08:07.492 [2024-11-15 12:29:47.650167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.492 [2024-11-15 12:29:47.650199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.492 #29 NEW cov: 12516 ft: 14721 corp: 9/260b lim: 90 exec/s: 0 rss: 74Mb L: 31/43 MS: 1 ChangeBinInt- 00:08:07.492 [2024-11-15 12:29:47.700380] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.492 [2024-11-15 12:29:47.700412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.492 [2024-11-15 12:29:47.700448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:07.492 [2024-11-15 12:29:47.700466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.492 NEW_FUNC[1/1]: 0x1c350e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:07.492 #30 NEW cov: 12533 ft: 14801 corp: 10/306b lim: 90 exec/s: 0 rss: 74Mb L: 46/46 MS: 1 InsertRepeatedBytes- 00:08:07.492 [2024-11-15 12:29:47.790614] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.492 [2024-11-15 12:29:47.790647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.492 [2024-11-15 12:29:47.790697] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:07.492 [2024-11-15 12:29:47.790716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.751 #31 NEW cov: 12533 ft: 14841 corp: 11/349b lim: 90 exec/s: 31 rss: 75Mb L: 43/46 MS: 1 CopyPart- 00:08:07.751 [2024-11-15 12:29:47.880801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.751 [2024-11-15 12:29:47.880833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.751 #32 NEW cov: 12533 ft: 14877 corp: 12/374b lim: 90 exec/s: 32 rss: 75Mb L: 25/46 MS: 1 EraseBytes- 00:08:07.751 [2024-11-15 12:29:47.941134] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.751 [2024-11-15 12:29:47.941165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.751 [2024-11-15 12:29:47.941204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:07.751 [2024-11-15 
12:29:47.941223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.751 [2024-11-15 12:29:47.941253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:07.751 [2024-11-15 12:29:47.941269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.751 [2024-11-15 12:29:47.941298] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:07.751 [2024-11-15 12:29:47.941322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:07.751 #38 NEW cov: 12533 ft: 15369 corp: 13/454b lim: 90 exec/s: 38 rss: 75Mb L: 80/80 MS: 1 CrossOver- 00:08:07.751 [2024-11-15 12:29:48.001056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.751 [2024-11-15 12:29:48.001101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.751 #39 NEW cov: 12533 ft: 15392 corp: 14/478b lim: 90 exec/s: 39 rss: 75Mb L: 24/80 MS: 1 EraseBytes- 00:08:07.751 [2024-11-15 12:29:48.091363] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.751 [2024-11-15 12:29:48.091394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.081 #40 NEW cov: 12533 ft: 15432 corp: 15/508b lim: 90 exec/s: 40 rss: 75Mb L: 30/80 MS: 1 EraseBytes- 00:08:08.081 [2024-11-15 12:29:48.181750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.081 [2024-11-15 12:29:48.181781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.081 [2024-11-15 12:29:48.181814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.081 [2024-11-15 12:29:48.181832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.081 [2024-11-15 12:29:48.181864] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:08.081 [2024-11-15 12:29:48.181880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.081 [2024-11-15 12:29:48.181909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:08.081 [2024-11-15 12:29:48.181926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:08.081 #41 NEW cov: 12533 ft: 15448 corp: 16/588b lim: 90 exec/s: 41 rss: 75Mb L: 80/80 MS: 1 CrossOver- 00:08:08.081 [2024-11-15 12:29:48.271870] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.081 [2024-11-15 12:29:48.271900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.081 [2024-11-15 12:29:48.271935] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.081 [2024-11-15 12:29:48.271952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.081 #42 NEW cov: 12533 ft: 15460 corp: 17/635b lim: 90 exec/s: 42 rss: 75Mb L: 47/80 MS: 1 InsertByte- 00:08:08.081 [2024-11-15 12:29:48.362072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.081 [2024-11-15 12:29:48.362101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.081 [2024-11-15 12:29:48.362155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.081 [2024-11-15 12:29:48.362174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.081 #45 NEW cov: 12533 ft: 15527 corp: 18/688b lim: 90 exec/s: 45 rss: 75Mb L: 53/80 MS: 3 CopyPart-InsertRepeatedBytes-InsertRepeatedBytes- 00:08:08.081 [2024-11-15 12:29:48.422276] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.081 [2024-11-15 12:29:48.422307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.081 [2024-11-15 12:29:48.422350] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.081 [2024-11-15 12:29:48.422369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.339 #46 NEW cov: 12533 ft: 15551 corp: 19/735b lim: 90 exec/s: 46 rss: 75Mb L: 47/80 MS: 1 ChangeBinInt- 00:08:08.339 [2024-11-15 12:29:48.512465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.339 [2024-11-15 12:29:48.512497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.339 [2024-11-15 12:29:48.512547] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.339 [2024-11-15 12:29:48.512565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.339 #47 NEW cov: 12533 ft: 15618 corp: 20/782b lim: 90 exec/s: 47 rss: 75Mb L: 47/80 MS: 1 InsertRepeatedBytes- 00:08:08.339 [2024-11-15 12:29:48.602731] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.339 [2024-11-15 12:29:48.602763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.339 [2024-11-15 12:29:48.602813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.339 [2024-11-15 12:29:48.602832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.339 #48 NEW cov: 12533 ft: 15635 corp: 21/832b lim: 90 exec/s: 48 rss: 75Mb L: 50/80 MS: 1 InsertRepeatedBytes- 00:08:08.598 [2024-11-15 12:29:48.692887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.598 [2024-11-15 12:29:48.692918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.598 #49 NEW cov: 12533 ft: 15651 corp: 22/863b lim: 90 exec/s: 49 rss: 75Mb L: 31/80 MS: 1 ChangeBit- 00:08:08.598 [2024-11-15 12:29:48.743007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.598 [2024-11-15 12:29:48.743038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.598 #50 NEW cov: 12540 ft: 15690 corp: 23/888b lim: 90 exec/s: 50 rss: 75Mb L: 25/80 MS: 1 InsertByte- 00:08:08.598 [2024-11-15 12:29:48.833243] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.598 [2024-11-15 12:29:48.833274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.598 #51 NEW cov: 12540 ft: 15699 corp: 24/919b lim: 90 exec/s: 25 rss: 75Mb L: 31/80 MS: 1 CopyPart- 00:08:08.598 #51 DONE cov: 12540 ft: 15699 corp: 24/919b lim: 90 exec/s: 25 rss: 75Mb 00:08:08.598 Done 51 runs in 2 second(s) 00:08:08.857 12:29:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:08:08.857 12:29:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:08.857 12:29:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:08.857 12:29:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:08:08.857 12:29:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:08:08.857 12:29:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:08.857 12:29:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:08.857 12:29:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:08.857 12:29:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:08:08.857 12:29:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:08.857 12:29:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:08.857 12:29:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:08:08.857 12:29:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:08:08.857 12:29:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:08.857 12:29:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:08:08.857 12:29:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:08.857 12:29:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:08.857 12:29:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:08.857 12:29:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:08:08.857 [2024-11-15 12:29:49.027397] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:08:08.857 [2024-11-15 12:29:49.027469] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid677280 ] 00:08:09.116 [2024-11-15 12:29:49.267488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.116 [2024-11-15 12:29:49.305844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.116 [2024-11-15 12:29:49.365230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.116 [2024-11-15 12:29:49.381461] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:08:09.116 INFO: Running with entropic power schedule (0xFF, 100). 00:08:09.116 INFO: Seed: 3406335841 00:08:09.116 INFO: Loaded 1 modules (387659 inline 8-bit counters): 387659 [0x2c4084c, 0x2c9f297), 00:08:09.116 INFO: Loaded 1 PC tables (387659 PCs): 387659 [0x2c9f298,0x3289748), 00:08:09.116 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:09.116 INFO: A corpus is not provided, starting from an empty corpus 00:08:09.116 #2 INITED exec/s: 0 rss: 65Mb 00:08:09.116 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:09.116 This may also happen if the target rejected all inputs we tried so far 00:08:09.116 [2024-11-15 12:29:49.430125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:09.116 [2024-11-15 12:29:49.430156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.116 [2024-11-15 12:29:49.430200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:09.116 [2024-11-15 12:29:49.430216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.116 [2024-11-15 12:29:49.430271] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:09.116 [2024-11-15 12:29:49.430288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.116 [2024-11-15 12:29:49.430344] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:09.116 [2024-11-15 12:29:49.430360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.635 NEW_FUNC[1/717]: 0x461128 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:08:09.635 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:09.635 #5 NEW cov: 12287 ft: 12286 corp: 2/42b lim: 50 exec/s: 0 rss: 73Mb L: 41/41 MS: 3 ChangeBit-ChangeByte-InsertRepeatedBytes- 00:08:09.635 [2024-11-15 12:29:49.761090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:09.635 [2024-11-15 12:29:49.761134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.635 [2024-11-15 12:29:49.761206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:09.635 [2024-11-15 12:29:49.761225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.635 [2024-11-15 12:29:49.761283] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:09.635 [2024-11-15 12:29:49.761301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.635 [2024-11-15 12:29:49.761378] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:09.635 [2024-11-15 12:29:49.761395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.635 #11 NEW cov: 12400 ft: 12910 corp: 3/83b lim: 50 exec/s: 0 rss: 73Mb L: 41/41 MS: 1 CopyPart- 00:08:09.635 [2024-11-15 12:29:49.821154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:09.636 [2024-11-15 12:29:49.821183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:08:09.636 [2024-11-15 12:29:49.821232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:09.636 [2024-11-15 12:29:49.821248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.636 [2024-11-15 12:29:49.821302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:09.636 [2024-11-15 12:29:49.821322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.636 [2024-11-15 12:29:49.821393] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:09.636 [2024-11-15 12:29:49.821409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.636 #12 NEW cov: 12406 ft: 13205 corp: 4/124b lim: 50 exec/s: 0 rss: 73Mb L: 41/41 MS: 1 CMP- DE: "\000\002"- 00:08:09.636 [2024-11-15 12:29:49.881337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:09.636 [2024-11-15 12:29:49.881368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.636 [2024-11-15 12:29:49.881433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:09.636 [2024-11-15 12:29:49.881450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.636 [2024-11-15 12:29:49.881509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:09.636 [2024-11-15 12:29:49.881525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.636 [2024-11-15 12:29:49.881582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:09.636 [2024-11-15 12:29:49.881597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.636 #13 NEW cov: 12491 ft: 13547 corp: 5/167b lim: 50 exec/s: 0 rss: 73Mb L: 43/43 MS: 1 PersAutoDict- DE: "\000\002"- 00:08:09.636 [2024-11-15 12:29:49.941487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:09.636 [2024-11-15 12:29:49.941514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.636 [2024-11-15 12:29:49.941584] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:09.636 [2024-11-15 12:29:49.941601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.636 [2024-11-15 12:29:49.941658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:09.636 [2024-11-15 12:29:49.941674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.636 [2024-11-15 12:29:49.941733] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:09.636 [2024-11-15 12:29:49.941749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.636 #14 NEW cov: 12491 ft: 13638 corp: 6/210b lim: 50 exec/s: 0 rss: 73Mb L: 43/43 MS: 1 PersAutoDict- DE: "\000\002"- 00:08:09.896 [2024-11-15 12:29:49.981671] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:09.896 [2024-11-15 12:29:49.981699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.896 [2024-11-15 12:29:49.981748] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:09.896 [2024-11-15 12:29:49.981764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.896 [2024-11-15 12:29:49.981822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:09.896 [2024-11-15 12:29:49.981838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.896 [2024-11-15 12:29:49.981895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:09.896 [2024-11-15 12:29:49.981911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.896 #15 NEW cov: 12491 ft: 13759 corp: 7/251b lim: 50 exec/s: 0 rss: 73Mb L: 41/43 MS: 1 CrossOver- 00:08:09.896 [2024-11-15 12:29:50.021749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:09.896 [2024-11-15 12:29:50.021781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.896 [2024-11-15 12:29:50.021825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:09.896 [2024-11-15 12:29:50.021842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.896 [2024-11-15 12:29:50.021899] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:09.896 [2024-11-15 12:29:50.021919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.896 [2024-11-15 12:29:50.021975] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:09.896 [2024-11-15 12:29:50.021991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.896 #16 NEW cov: 12491 ft: 13845 corp: 8/292b lim: 50 exec/s: 0 rss: 73Mb L: 41/43 MS: 1 PersAutoDict- DE: "\000\002"- 00:08:09.896 [2024-11-15 12:29:50.061869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:09.896 [2024-11-15 12:29:50.061904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.896 [2024-11-15 12:29:50.061941] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:09.896 [2024-11-15 12:29:50.061957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.896 [2024-11-15 12:29:50.062012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:09.896 [2024-11-15 12:29:50.062027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.896 [2024-11-15 12:29:50.062085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:09.896 [2024-11-15 12:29:50.062101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.896 #17 NEW cov: 12491 ft: 13868 corp: 9/333b lim: 50 exec/s: 0 rss: 73Mb L: 41/43 MS: 1 ShuffleBytes- 00:08:09.896 [2024-11-15 12:29:50.101965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:09.896 [2024-11-15 12:29:50.102000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.896 [2024-11-15 12:29:50.102037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:09.896 [2024-11-15 12:29:50.102054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.896 [2024-11-15 12:29:50.102112] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:09.896 [2024-11-15 12:29:50.102129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.896 [2024-11-15 12:29:50.102187] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:09.896 [2024-11-15 12:29:50.102203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.896 #18 NEW cov: 12491 ft: 13938 corp: 10/374b lim: 50 exec/s: 0 rss: 73Mb L: 41/43 MS: 1 ChangeByte- 00:08:09.896 [2024-11-15 12:29:50.142063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:09.896 [2024-11-15 12:29:50.142093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.896 [2024-11-15 12:29:50.142151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:09.896 [2024-11-15 12:29:50.142168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.896 [2024-11-15 12:29:50.142224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:09.896 [2024-11-15 12:29:50.142241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.896 [2024-11-15 12:29:50.142297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:09.896 [2024-11-15 
12:29:50.142320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.896 #19 NEW cov: 12491 ft: 14003 corp: 11/415b lim: 50 exec/s: 0 rss: 73Mb L: 41/43 MS: 1 ShuffleBytes- 00:08:09.896 [2024-11-15 12:29:50.182180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:09.896 [2024-11-15 12:29:50.182209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.896 [2024-11-15 12:29:50.182268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:09.897 [2024-11-15 12:29:50.182284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.897 [2024-11-15 12:29:50.182345] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:09.897 [2024-11-15 12:29:50.182362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.897 [2024-11-15 12:29:50.182418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:09.897 [2024-11-15 12:29:50.182433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.897 #20 NEW cov: 12491 ft: 14026 corp: 12/456b lim: 50 exec/s: 0 rss: 73Mb L: 41/43 MS: 1 ChangeBinInt- 00:08:09.897 [2024-11-15 12:29:50.222295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:09.897 [2024-11-15 12:29:50.222328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.897 [2024-11-15 12:29:50.222396] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:09.897 [2024-11-15 12:29:50.222413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.897 [2024-11-15 12:29:50.222471] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:09.897 [2024-11-15 12:29:50.222487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.897 [2024-11-15 12:29:50.222544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:09.897 [2024-11-15 12:29:50.222560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.156 #21 NEW cov: 12491 ft: 14068 corp: 13/497b lim: 50 exec/s: 0 rss: 73Mb L: 41/43 MS: 1 PersAutoDict- DE: "\000\002"- 00:08:10.156 [2024-11-15 12:29:50.262263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.156 [2024-11-15 12:29:50.262292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.156 [2024-11-15 12:29:50.262347] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.156 
[2024-11-15 12:29:50.262364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.156 [2024-11-15 12:29:50.262438] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.156 [2024-11-15 12:29:50.262454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.156 #22 NEW cov: 12491 ft: 14462 corp: 14/530b lim: 50 exec/s: 0 rss: 73Mb L: 33/43 MS: 1 EraseBytes- 00:08:10.156 [2024-11-15 12:29:50.302548] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.156 [2024-11-15 12:29:50.302577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.156 [2024-11-15 12:29:50.302619] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.156 [2024-11-15 12:29:50.302635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.156 [2024-11-15 12:29:50.302690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.156 [2024-11-15 12:29:50.302720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.156 [2024-11-15 12:29:50.302788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.156 [2024-11-15 12:29:50.302804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.156 NEW_FUNC[1/1]: 0x1c350e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:10.156 #23 NEW cov: 12514 ft: 14571 corp: 15/579b lim: 50 exec/s: 0 rss: 74Mb L: 49/49 MS: 1 InsertRepeatedBytes- 00:08:10.156 [2024-11-15 12:29:50.362563] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.156 [2024-11-15 12:29:50.362591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.156 [2024-11-15 12:29:50.362637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.156 [2024-11-15 12:29:50.362654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.156 [2024-11-15 12:29:50.362711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.156 [2024-11-15 12:29:50.362727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.156 #24 NEW cov: 12514 ft: 14642 corp: 16/609b lim: 50 exec/s: 0 rss: 74Mb L: 30/49 MS: 1 CrossOver- 00:08:10.156 [2024-11-15 12:29:50.422944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.156 [2024-11-15 12:29:50.422971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.156 [2024-11-15 
12:29:50.423024] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.156 [2024-11-15 12:29:50.423041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.156 [2024-11-15 12:29:50.423096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.156 [2024-11-15 12:29:50.423113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.156 [2024-11-15 12:29:50.423167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.156 [2024-11-15 12:29:50.423184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.156 #25 NEW cov: 12514 ft: 14684 corp: 17/655b lim: 50 exec/s: 25 rss: 74Mb L: 46/49 MS: 1 InsertRepeatedBytes- 00:08:10.156 [2024-11-15 12:29:50.483067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.156 [2024-11-15 12:29:50.483095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.156 [2024-11-15 12:29:50.483151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.156 [2024-11-15 12:29:50.483165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.156 [2024-11-15 12:29:50.483224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.156 [2024-11-15 12:29:50.483240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.156 [2024-11-15 12:29:50.483297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.156 [2024-11-15 12:29:50.483313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.416 #26 NEW cov: 12514 ft: 14738 corp: 18/698b lim: 50 exec/s: 26 rss: 74Mb L: 43/49 MS: 1 ShuffleBytes- 00:08:10.416 [2024-11-15 12:29:50.542927] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.416 [2024-11-15 12:29:50.542954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.416 [2024-11-15 12:29:50.543007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.416 [2024-11-15 12:29:50.543024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.416 #27 NEW cov: 12514 ft: 15097 corp: 19/720b lim: 50 exec/s: 27 rss: 74Mb L: 22/49 MS: 1 CrossOver- 00:08:10.416 [2024-11-15 12:29:50.603348] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.416 [2024-11-15 12:29:50.603374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.416 
[2024-11-15 12:29:50.603445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.416 [2024-11-15 12:29:50.603461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.416 [2024-11-15 12:29:50.603516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.416 [2024-11-15 12:29:50.603532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.416 [2024-11-15 12:29:50.603587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.416 [2024-11-15 12:29:50.603603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.416 #28 NEW cov: 12514 ft: 15109 corp: 20/762b lim: 50 exec/s: 28 rss: 74Mb L: 42/49 MS: 1 InsertByte- 00:08:10.416 [2024-11-15 12:29:50.663544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.416 [2024-11-15 12:29:50.663571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.416 [2024-11-15 12:29:50.663637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.416 [2024-11-15 12:29:50.663653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.416 [2024-11-15 12:29:50.663708] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.416 [2024-11-15 12:29:50.663724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.416 [2024-11-15 12:29:50.663780] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.416 [2024-11-15 12:29:50.663796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.416 #29 NEW cov: 12514 ft: 15134 corp: 21/811b lim: 50 exec/s: 29 rss: 74Mb L: 49/49 MS: 1 ChangeBinInt- 00:08:10.416 [2024-11-15 12:29:50.723698] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.416 [2024-11-15 12:29:50.723728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.416 [2024-11-15 12:29:50.723767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.416 [2024-11-15 12:29:50.723783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.416 [2024-11-15 12:29:50.723838] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.416 [2024-11-15 12:29:50.723854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.416 [2024-11-15 12:29:50.723912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 
cid:3 nsid:0 00:08:10.416 [2024-11-15 12:29:50.723927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.675 #30 NEW cov: 12514 ft: 15153 corp: 22/852b lim: 50 exec/s: 30 rss: 74Mb L: 41/49 MS: 1 CopyPart- 00:08:10.676 [2024-11-15 12:29:50.783872] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.676 [2024-11-15 12:29:50.783901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.676 [2024-11-15 12:29:50.783952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.676 [2024-11-15 12:29:50.783968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.676 [2024-11-15 12:29:50.784026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.676 [2024-11-15 12:29:50.784042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.676 [2024-11-15 12:29:50.784098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.676 [2024-11-15 12:29:50.784114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.676 #31 NEW cov: 12514 ft: 15173 corp: 23/900b lim: 50 exec/s: 31 rss: 74Mb L: 48/49 MS: 1 InsertRepeatedBytes- 00:08:10.676 [2024-11-15 12:29:50.823935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.676 [2024-11-15 12:29:50.823962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.676 [2024-11-15 12:29:50.824014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.676 [2024-11-15 12:29:50.824030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.676 [2024-11-15 12:29:50.824087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.676 [2024-11-15 12:29:50.824103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.676 [2024-11-15 12:29:50.824157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.676 [2024-11-15 12:29:50.824172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.676 #32 NEW cov: 12514 ft: 15237 corp: 24/945b lim: 50 exec/s: 32 rss: 74Mb L: 45/49 MS: 1 CMP- DE: "\000\000\000\010"- 00:08:10.676 [2024-11-15 12:29:50.884169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.676 [2024-11-15 12:29:50.884199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.676 [2024-11-15 12:29:50.884250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.676 [2024-11-15 12:29:50.884269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.676 [2024-11-15 12:29:50.884329] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.676 [2024-11-15 12:29:50.884345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.676 [2024-11-15 12:29:50.884402] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.676 [2024-11-15 12:29:50.884418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.676 #33 NEW cov: 12514 ft: 15239 corp: 25/987b lim: 50 exec/s: 33 rss: 74Mb L: 42/49 MS: 1 InsertByte- 00:08:10.676 [2024-11-15 12:29:50.924103] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.676 [2024-11-15 12:29:50.924130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.676 [2024-11-15 12:29:50.924175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.676 [2024-11-15 12:29:50.924191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.676 [2024-11-15 12:29:50.924247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.676 [2024-11-15 12:29:50.924263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.676 #34 NEW cov: 12514 ft: 15291 corp: 26/1023b lim: 50 exec/s: 34 rss: 74Mb L: 36/49 MS: 1 EraseBytes- 00:08:10.676 [2024-11-15 12:29:50.964348] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.676 [2024-11-15 12:29:50.964376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.676 [2024-11-15 12:29:50.964442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.676 [2024-11-15 12:29:50.964458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.676 [2024-11-15 12:29:50.964513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.676 [2024-11-15 12:29:50.964528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.676 [2024-11-15 12:29:50.964582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.676 [2024-11-15 12:29:50.964598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.676 #35 NEW cov: 12514 ft: 15293 corp: 27/1064b lim: 50 exec/s: 35 rss: 74Mb L: 41/49 MS: 1 ChangeBit- 00:08:10.676 [2024-11-15 12:29:51.004327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE 
(15) sqid:1 cid:0 nsid:0 00:08:10.676 [2024-11-15 12:29:51.004354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.676 [2024-11-15 12:29:51.004420] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.676 [2024-11-15 12:29:51.004436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.676 [2024-11-15 12:29:51.004492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.676 [2024-11-15 12:29:51.004508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.936 #36 NEW cov: 12514 ft: 15295 corp: 28/1094b lim: 50 exec/s: 36 rss: 74Mb L: 30/49 MS: 1 ChangeBit- 00:08:10.936 [2024-11-15 12:29:51.064676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.936 [2024-11-15 12:29:51.064705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.936 [2024-11-15 12:29:51.064771] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.936 [2024-11-15 12:29:51.064788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.936 [2024-11-15 12:29:51.064841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.936 [2024-11-15 12:29:51.064858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.936 [2024-11-15 12:29:51.064914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.936 [2024-11-15 12:29:51.064931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.936 #37 NEW cov: 12514 ft: 15304 corp: 29/1135b lim: 50 exec/s: 37 rss: 74Mb L: 41/49 MS: 1 ChangeByte- 00:08:10.936 [2024-11-15 12:29:51.124730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.936 [2024-11-15 12:29:51.124758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.936 [2024-11-15 12:29:51.124808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.936 [2024-11-15 12:29:51.124823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.936 [2024-11-15 12:29:51.124879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.936 [2024-11-15 12:29:51.124895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.936 #38 NEW cov: 12514 ft: 15329 corp: 30/1168b lim: 50 exec/s: 38 rss: 74Mb L: 33/49 MS: 1 CopyPart- 00:08:10.936 [2024-11-15 12:29:51.164938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) 
sqid:1 cid:0 nsid:0 00:08:10.936 [2024-11-15 12:29:51.164965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.936 [2024-11-15 12:29:51.165018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.936 [2024-11-15 12:29:51.165035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.936 [2024-11-15 12:29:51.165089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.936 [2024-11-15 12:29:51.165104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.936 [2024-11-15 12:29:51.165158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.936 [2024-11-15 12:29:51.165173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.936 #39 NEW cov: 12514 ft: 15340 corp: 31/1212b lim: 50 exec/s: 39 rss: 74Mb L: 44/49 MS: 1 CrossOver- 00:08:10.936 [2024-11-15 12:29:51.205130] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.936 [2024-11-15 12:29:51.205157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.936 [2024-11-15 12:29:51.205207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.936 [2024-11-15 12:29:51.205223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.936 [2024-11-15 12:29:51.205282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.936 [2024-11-15 12:29:51.205297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.936 [2024-11-15 12:29:51.205356] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.936 [2024-11-15 12:29:51.205371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.936 #40 NEW cov: 12514 ft: 15354 corp: 32/1254b lim: 50 exec/s: 40 rss: 74Mb L: 42/49 MS: 1 InsertByte- 00:08:10.936 [2024-11-15 12:29:51.245233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.936 [2024-11-15 12:29:51.245260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.936 [2024-11-15 12:29:51.245323] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.936 [2024-11-15 12:29:51.245339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.936 [2024-11-15 12:29:51.245395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.936 [2024-11-15 12:29:51.245410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.936 [2024-11-15 12:29:51.245468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.936 [2024-11-15 12:29:51.245484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.195 #41 NEW cov: 12514 ft: 15387 corp: 33/1295b lim: 50 exec/s: 41 rss: 74Mb L: 41/49 MS: 1 PersAutoDict- DE: "\000\000\000\010"- 00:08:11.195 [2024-11-15 12:29:51.305365] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.195 [2024-11-15 12:29:51.305392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.195 [2024-11-15 12:29:51.305465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.196 [2024-11-15 12:29:51.305482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.196 [2024-11-15 12:29:51.305540] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.196 [2024-11-15 12:29:51.305556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.196 [2024-11-15 12:29:51.305612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:11.196 [2024-11-15 12:29:51.305629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.196 #42 NEW cov: 12514 ft: 15412 corp: 34/1340b lim: 50 exec/s: 42 rss: 75Mb L: 45/49 MS: 1 ChangeBit- 00:08:11.196 [2024-11-15 12:29:51.365390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.196 [2024-11-15 12:29:51.365418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.196 [2024-11-15 12:29:51.365483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.196 [2024-11-15 12:29:51.365500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.196 [2024-11-15 12:29:51.365558] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.196 [2024-11-15 12:29:51.365575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.196 #43 NEW cov: 12514 ft: 15420 corp: 35/1371b lim: 50 exec/s: 43 rss: 75Mb L: 31/49 MS: 1 InsertByte- 00:08:11.196 [2024-11-15 12:29:51.405652] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.196 [2024-11-15 12:29:51.405679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.196 [2024-11-15 12:29:51.405743] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.196 [2024-11-15 12:29:51.405760] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.196 [2024-11-15 12:29:51.405816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.196 [2024-11-15 12:29:51.405831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.196 [2024-11-15 12:29:51.405887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:11.196 [2024-11-15 12:29:51.405904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.196 #44 NEW cov: 12514 ft: 15423 corp: 36/1416b lim: 50 exec/s: 22 rss: 75Mb L: 45/49 MS: 1 ChangeBinInt- 00:08:11.196 #44 DONE cov: 12514 ft: 15423 corp: 36/1416b lim: 50 exec/s: 22 rss: 75Mb 00:08:11.196 ###### Recommended dictionary. ###### 00:08:11.196 "\000\002" # Uses: 4 00:08:11.196 "\000\000\000\010" # Uses: 1 00:08:11.196 ###### End of recommended dictionary. ###### 00:08:11.196 Done 44 runs in 2 second(s) 00:08:11.455 12:29:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:08:11.455 12:29:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:11.455 12:29:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:11.455 12:29:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:08:11.455 12:29:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:08:11.455 12:29:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:11.455 12:29:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:11.455 12:29:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:11.455 12:29:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:08:11.455 12:29:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:11.455 12:29:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:11.455 12:29:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:08:11.455 12:29:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:08:11.455 12:29:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:11.455 12:29:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:08:11.455 12:29:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:11.455 12:29:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:11.455 12:29:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:11.455 12:29:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:08:11.455 [2024-11-15 12:29:51.605121] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:08:11.455 [2024-11-15 12:29:51.605201] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid677566 ] 00:08:11.715 [2024-11-15 12:29:51.830490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.715 [2024-11-15 12:29:51.869878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.715 [2024-11-15 12:29:51.929442] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.715 [2024-11-15 12:29:51.945675] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:08:11.715 INFO: Running with entropic power schedule (0xFF, 100). 00:08:11.715 INFO: Seed: 1678372398 00:08:11.715 INFO: Loaded 1 modules (387659 inline 8-bit counters): 387659 [0x2c4084c, 0x2c9f297), 00:08:11.715 INFO: Loaded 1 PC tables (387659 PCs): 387659 [0x2c9f298,0x3289748), 00:08:11.715 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:11.715 INFO: A corpus is not provided, starting from an empty corpus 00:08:11.715 #2 INITED exec/s: 0 rss: 66Mb 00:08:11.715 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:11.715 This may also happen if the target rejected all inputs we tried so far 00:08:11.715 [2024-11-15 12:29:52.000531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:11.715 [2024-11-15 12:29:52.000568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.234 NEW_FUNC[1/717]: 0x4633f8 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:08:12.234 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:12.234 #7 NEW cov: 12313 ft: 12312 corp: 2/22b lim: 85 exec/s: 0 rss: 73Mb L: 21/21 MS: 5 ChangeBit-CrossOver-CrossOver-EraseBytes-InsertRepeatedBytes- 00:08:12.234 [2024-11-15 12:29:52.351517] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.234 [2024-11-15 12:29:52.351563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.234 [2024-11-15 12:29:52.351615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:12.234 [2024-11-15 12:29:52.351633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.234 [2024-11-15 12:29:52.351662] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:12.234 [2024-11-15 12:29:52.351679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 
p:0 m:0 dnr:1 00:08:12.234 #8 NEW cov: 12426 ft: 13579 corp: 3/86b lim: 85 exec/s: 0 rss: 74Mb L: 64/64 MS: 1 InsertRepeatedBytes- 00:08:12.234 [2024-11-15 12:29:52.411406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.234 [2024-11-15 12:29:52.411436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.234 #9 NEW cov: 12432 ft: 13797 corp: 4/104b lim: 85 exec/s: 0 rss: 74Mb L: 18/64 MS: 1 CrossOver- 00:08:12.234 [2024-11-15 12:29:52.461555] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.234 [2024-11-15 12:29:52.461586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.234 #15 NEW cov: 12517 ft: 14113 corp: 5/125b lim: 85 exec/s: 0 rss: 74Mb L: 21/64 MS: 1 ChangeByte- 00:08:12.234 [2024-11-15 12:29:52.551803] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.234 [2024-11-15 12:29:52.551837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.493 #21 NEW cov: 12517 ft: 14204 corp: 6/143b lim: 85 exec/s: 0 rss: 74Mb L: 18/64 MS: 1 ChangeBit- 00:08:12.493 [2024-11-15 12:29:52.642010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.493 [2024-11-15 12:29:52.642040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.493 #22 NEW cov: 12517 ft: 14288 corp: 7/164b lim: 85 exec/s: 0 rss: 74Mb L: 21/64 MS: 1 ChangeBit- 00:08:12.493 [2024-11-15 12:29:52.702121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.493 [2024-11-15 12:29:52.702151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.493 #23 NEW cov: 12517 ft: 14414 corp: 8/182b lim: 85 exec/s: 0 rss: 74Mb L: 18/64 MS: 1 ChangeBit- 00:08:12.493 [2024-11-15 12:29:52.792432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.493 [2024-11-15 12:29:52.792461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.493 #24 NEW cov: 12517 ft: 14439 corp: 9/200b lim: 85 exec/s: 0 rss: 74Mb L: 18/64 MS: 1 ChangeBit- 00:08:12.751 [2024-11-15 12:29:52.842550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.751 [2024-11-15 12:29:52.842583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.751 NEW_FUNC[1/1]: 0x1c350e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:12.751 #25 NEW cov: 12534 ft: 14625 corp: 10/219b lim: 85 exec/s: 0 rss: 74Mb L: 19/64 MS: 1 InsertByte- 00:08:12.751 [2024-11-15 12:29:52.932810] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.751 [2024-11-15 12:29:52.932843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.751 #26 NEW cov: 12534 ft: 14678 corp: 11/239b lim: 85 exec/s: 26 rss: 74Mb L: 20/64 MS: 1 InsertByte- 00:08:12.751 [2024-11-15 12:29:53.023009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.751 [2024-11-15 12:29:53.023041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.010 #27 NEW cov: 12534 ft: 14699 corp: 12/259b lim: 85 exec/s: 27 rss: 74Mb L: 20/64 MS: 1 ShuffleBytes- 00:08:13.010 [2024-11-15 12:29:53.113229] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.010 [2024-11-15 12:29:53.113260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.010 #28 NEW cov: 12534 ft: 14846 corp: 13/279b lim: 85 exec/s: 28 rss: 74Mb L: 20/64 MS: 1 CopyPart- 00:08:13.010 [2024-11-15 12:29:53.203477] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.010 [2024-11-15 12:29:53.203507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.010 #29 NEW cov: 12534 ft: 14891 corp: 14/301b lim: 85 exec/s: 29 rss: 74Mb L: 22/64 MS: 1 InsertByte- 00:08:13.010 [2024-11-15 12:29:53.263749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.010 [2024-11-15 12:29:53.263780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.010 [2024-11-15 12:29:53.263813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.010 [2024-11-15 12:29:53.263831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.010 [2024-11-15 12:29:53.263867] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.010 [2024-11-15 12:29:53.263883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.010 #35 NEW cov: 12534 ft: 14902 corp: 15/353b lim: 85 exec/s: 35 rss: 74Mb L: 52/64 MS: 1 EraseBytes- 00:08:13.010 [2024-11-15 12:29:53.353920] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.010 [2024-11-15 12:29:53.353951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.269 #36 NEW cov: 12534 ft: 14947 corp: 16/382b lim: 85 exec/s: 36 rss: 74Mb L: 29/64 MS: 1 CMP- DE: "\377\003\000\000\000\000\000\000"- 00:08:13.269 [2024-11-15 12:29:53.403998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.269 [2024-11-15 12:29:53.404028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.269 #41 NEW cov: 12534 ft: 15006 corp: 17/406b lim: 85 exec/s: 41 rss: 74Mb L: 24/64 MS: 5 InsertRepeatedBytes-ChangeBinInt-ChangeBinInt-EraseBytes-InsertRepeatedBytes- 00:08:13.269 [2024-11-15 12:29:53.454076] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.269 [2024-11-15 12:29:53.454105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.269 #42 NEW cov: 12534 ft: 15036 corp: 18/424b lim: 85 exec/s: 42 rss: 74Mb L: 18/64 MS: 1 ChangeBinInt- 00:08:13.269 [2024-11-15 12:29:53.514325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.269 [2024-11-15 12:29:53.514355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.269 [2024-11-15 12:29:53.514387] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.269 [2024-11-15 12:29:53.514405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.269 #43 NEW cov: 12534 ft: 15364 corp: 19/474b lim: 85 exec/s: 43 rss: 74Mb L: 50/64 MS: 1 CrossOver- 00:08:13.269 [2024-11-15 12:29:53.574491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.269 [2024-11-15 12:29:53.574521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.269 [2024-11-15 12:29:53.574556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.269 [2024-11-15 12:29:53.574574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.528 #44 NEW cov: 12534 ft: 15370 corp: 20/508b lim: 85 exec/s: 44 rss: 74Mb L: 34/64 MS: 1 CrossOver- 00:08:13.528 [2024-11-15 12:29:53.664776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.528 [2024-11-15 12:29:53.664806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.528 [2024-11-15 12:29:53.664854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.528 [2024-11-15 12:29:53.664871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.528 [2024-11-15 12:29:53.664902] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.528 [2024-11-15 12:29:53.664919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.528 #45 NEW cov: 12534 ft: 15389 corp: 21/568b lim: 85 exec/s: 45 rss: 75Mb L: 60/64 MS: 1 CMP- DE: "\001A\212\376\004\353\350x"- 00:08:13.528 [2024-11-15 12:29:53.755034] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.528 [2024-11-15 12:29:53.755062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.528 [2024-11-15 12:29:53.755109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.528 [2024-11-15 12:29:53.755128] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.528 [2024-11-15 12:29:53.755158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.528 [2024-11-15 12:29:53.755175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.528 #47 NEW cov: 12534 ft: 15391 corp: 22/627b lim: 85 exec/s: 47 rss: 75Mb L: 59/64 MS: 2 EraseBytes-InsertRepeatedBytes- 00:08:13.528 [2024-11-15 12:29:53.845172] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.528 [2024-11-15 12:29:53.845202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.787 #51 NEW cov: 12541 ft: 15393 corp: 23/649b lim: 85 exec/s: 51 rss: 75Mb L: 22/64 MS: 4 EraseBytes-ChangeBinInt-PersAutoDict-CMP- DE: "\377\003\000\000\000\000\000\000"-"?\000\000\000\000\000\000\000"- 00:08:13.787 [2024-11-15 12:29:53.905295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.787 [2024-11-15 12:29:53.905333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.787 #52 NEW cov: 12541 ft: 15434 corp: 24/670b lim: 85 exec/s: 52 rss: 75Mb L: 21/64 MS: 1 ChangeBinInt- 00:08:13.787 [2024-11-15 12:29:53.965597] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.787 [2024-11-15 12:29:53.965628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.787 [2024-11-15 12:29:53.965662] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.788 [2024-11-15 12:29:53.965680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.788 [2024-11-15 12:29:53.965710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.788 [2024-11-15 12:29:53.965727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.788 #58 NEW cov: 12541 ft: 15474 corp: 25/722b lim: 85 exec/s: 29 rss: 75Mb L: 52/64 MS: 1 ChangeBinInt- 00:08:13.788 #58 DONE cov: 12541 ft: 15474 corp: 25/722b lim: 85 exec/s: 29 rss: 75Mb 00:08:13.788 ###### Recommended dictionary. ###### 00:08:13.788 "\377\003\000\000\000\000\000\000" # Uses: 1 00:08:13.788 "\001A\212\376\004\353\350x" # Uses: 0 00:08:13.788 "?\000\000\000\000\000\000\000" # Uses: 0 00:08:13.788 ###### End of recommended dictionary. 
######
00:08:13.788 Done 58 runs in 2 second(s)
00:08:13.788 12:29:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz
00:08:13.788 12:29:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:13.788 12:29:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:13.788 12:29:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1
00:08:13.788 12:29:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23
00:08:13.788 12:29:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:08:13.788 12:29:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:08:13.788 12:29:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23
00:08:13.788 12:29:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf
00:08:13.788 12:29:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:08:13.788 12:29:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:08:13.788 12:29:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23
00:08:13.788 12:29:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423
00:08:13.788 12:29:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23
00:08:13.788 12:29:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423'
00:08:13.788 12:29:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:08:14.046 12:29:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:08:14.046 12:29:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:08:14.046 12:29:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23
00:08:14.046 [2024-11-15 12:29:54.161404] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization...
00:08:14.046 [2024-11-15 12:29:54.161474] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid677879 ] 00:08:14.046 [2024-11-15 12:29:54.381906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.305 [2024-11-15 12:29:54.421786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.305 [2024-11-15 12:29:54.481270] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.305 [2024-11-15 12:29:54.497512] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:08:14.305 INFO: Running with entropic power schedule (0xFF, 100). 00:08:14.305 INFO: Seed: 4229411553 00:08:14.305 INFO: Loaded 1 modules (387659 inline 8-bit counters): 387659 [0x2c4084c, 0x2c9f297), 00:08:14.305 INFO: Loaded 1 PC tables (387659 PCs): 387659 [0x2c9f298,0x3289748), 00:08:14.305 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:14.305 INFO: A corpus is not provided, starting from an empty corpus 00:08:14.305 #2 INITED exec/s: 0 rss: 66Mb 00:08:14.305 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:14.305 This may also happen if the target rejected all inputs we tried so far 00:08:14.305 [2024-11-15 12:29:54.553091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:14.305 [2024-11-15 12:29:54.553122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.305 [2024-11-15 12:29:54.553169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:14.305 [2024-11-15 12:29:54.553185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.305 [2024-11-15 12:29:54.553239] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:14.305 [2024-11-15 12:29:54.553253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.305 [2024-11-15 12:29:54.553307] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:14.305 [2024-11-15 12:29:54.553328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.568 NEW_FUNC[1/716]: 0x466638 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:08:14.568 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:14.568 #21 NEW cov: 12233 ft: 12245 corp: 2/24b lim: 25 exec/s: 0 rss: 73Mb L: 23/23 MS: 4 InsertByte-ChangeByte-ChangeByte-InsertRepeatedBytes- 00:08:14.568 [2024-11-15 12:29:54.883634] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:14.568 [2024-11-15 12:29:54.883671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:08:14.568 #22 NEW cov: 12359 ft: 13507 corp: 3/31b lim: 25 exec/s: 0 rss: 74Mb L: 7/23 MS: 1 InsertRepeatedBytes- 00:08:14.833 [2024-11-15 12:29:54.924027] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:14.833 [2024-11-15 12:29:54.924057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.833 [2024-11-15 12:29:54.924104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:14.833 [2024-11-15 12:29:54.924121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.833 [2024-11-15 12:29:54.924175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:14.833 [2024-11-15 12:29:54.924191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.833 [2024-11-15 12:29:54.924248] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:14.833 [2024-11-15 12:29:54.924264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.833 #23 NEW cov: 12365 ft: 13687 corp: 4/55b lim: 25 exec/s: 0 rss: 74Mb L: 24/24 MS: 1 InsertByte- 00:08:14.833 [2024-11-15 12:29:54.984189] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:14.833 [2024-11-15 12:29:54.984217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.833 [2024-11-15 12:29:54.984281] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:14.833 [2024-11-15 12:29:54.984298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.833 [2024-11-15 12:29:54.984359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:14.833 [2024-11-15 12:29:54.984375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.833 [2024-11-15 12:29:54.984430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:14.833 [2024-11-15 12:29:54.984445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.833 #24 NEW cov: 12450 ft: 13965 corp: 5/78b lim: 25 exec/s: 0 rss: 74Mb L: 23/24 MS: 1 CopyPart- 00:08:14.833 [2024-11-15 12:29:55.024059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:14.833 [2024-11-15 12:29:55.024087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.833 [2024-11-15 12:29:55.024125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:14.833 [2024-11-15 12:29:55.024141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.833 #25 NEW 
cov: 12450 ft: 14320 corp: 6/91b lim: 25 exec/s: 0 rss: 74Mb L: 13/24 MS: 1 EraseBytes- 00:08:14.833 [2024-11-15 12:29:55.064036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:14.833 [2024-11-15 12:29:55.064064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.833 #26 NEW cov: 12450 ft: 14413 corp: 7/99b lim: 25 exec/s: 0 rss: 74Mb L: 8/24 MS: 1 InsertByte- 00:08:14.833 [2024-11-15 12:29:55.124570] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:14.833 [2024-11-15 12:29:55.124597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.833 [2024-11-15 12:29:55.124662] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:14.833 [2024-11-15 12:29:55.124680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.833 [2024-11-15 12:29:55.124735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:14.833 [2024-11-15 12:29:55.124751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.833 [2024-11-15 12:29:55.124807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:14.833 [2024-11-15 12:29:55.124823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.833 #27 NEW cov: 12450 ft: 14533 corp: 8/122b lim: 25 exec/s: 0 rss: 74Mb L: 23/24 MS: 1 ChangeBit- 00:08:14.833 [2024-11-15 12:29:55.164333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:14.833 [2024-11-15 12:29:55.164360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.093 #28 NEW cov: 12450 ft: 14580 corp: 9/131b lim: 25 exec/s: 0 rss: 74Mb L: 9/24 MS: 1 CrossOver- 00:08:15.093 [2024-11-15 12:29:55.204759] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.093 [2024-11-15 12:29:55.204786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.093 [2024-11-15 12:29:55.204838] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.093 [2024-11-15 12:29:55.204853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.093 [2024-11-15 12:29:55.204906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.093 [2024-11-15 12:29:55.204922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.093 [2024-11-15 12:29:55.204977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.093 [2024-11-15 12:29:55.204993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.093 #29 NEW cov: 12450 ft: 14599 corp: 10/154b lim: 25 exec/s: 0 rss: 74Mb L: 23/24 MS: 1 ChangeBit- 00:08:15.093 [2024-11-15 12:29:55.244564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.093 [2024-11-15 12:29:55.244592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.093 #30 NEW cov: 12450 ft: 14647 corp: 11/163b lim: 25 exec/s: 0 rss: 74Mb L: 9/24 MS: 1 ChangeBit- 00:08:15.093 [2024-11-15 12:29:55.305086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.093 [2024-11-15 12:29:55.305112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.093 [2024-11-15 12:29:55.305184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.093 [2024-11-15 12:29:55.305202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.093 [2024-11-15 12:29:55.305257] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.093 [2024-11-15 12:29:55.305273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.093 [2024-11-15 12:29:55.305332] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.093 [2024-11-15 12:29:55.305348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.093 #31 NEW cov: 12450 ft: 14666 corp: 12/186b lim: 25 exec/s: 0 rss: 74Mb L: 23/24 MS: 1 ShuffleBytes- 00:08:15.093 [2024-11-15 12:29:55.365349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.093 [2024-11-15 12:29:55.365377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.093 [2024-11-15 12:29:55.365433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.093 [2024-11-15 12:29:55.365449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.093 [2024-11-15 12:29:55.365506] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.093 [2024-11-15 12:29:55.365520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.093 [2024-11-15 12:29:55.365575] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.093 [2024-11-15 12:29:55.365590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.093 [2024-11-15 12:29:55.365646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:08:15.093 [2024-11-15 12:29:55.365661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:15.093 #32 NEW cov: 12450 ft: 14791 corp: 13/211b lim: 25 exec/s: 0 rss: 74Mb L: 25/25 MS: 1 CopyPart- 00:08:15.093 [2024-11-15 12:29:55.425403] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.093 [2024-11-15 12:29:55.425431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.093 [2024-11-15 12:29:55.425478] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.093 [2024-11-15 12:29:55.425494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.093 [2024-11-15 12:29:55.425565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.093 [2024-11-15 12:29:55.425582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.093 [2024-11-15 12:29:55.425638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.093 [2024-11-15 12:29:55.425654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.352 NEW_FUNC[1/1]: 0x1c350e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:15.352 #33 NEW cov: 12473 ft: 14854 corp: 14/234b lim: 25 exec/s: 0 rss: 74Mb L: 23/25 MS: 1 ChangeByte- 00:08:15.352 [2024-11-15 12:29:55.465486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.352 [2024-11-15 12:29:55.465514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.352 [2024-11-15 12:29:55.465580] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.352 [2024-11-15 12:29:55.465596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.352 [2024-11-15 12:29:55.465651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.352 [2024-11-15 12:29:55.465666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.352 [2024-11-15 12:29:55.465719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.352 [2024-11-15 12:29:55.465735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.352 #34 NEW cov: 12473 ft: 14878 corp: 15/258b lim: 25 exec/s: 0 rss: 74Mb L: 24/25 MS: 1 CrossOver- 00:08:15.352 [2024-11-15 12:29:55.505263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.352 [2024-11-15 12:29:55.505291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.352 #35 NEW cov: 12473 ft: 14891 corp: 16/263b lim: 25 exec/s: 0 rss: 74Mb L: 5/25 MS: 1 EraseBytes- 00:08:15.352 [2024-11-15 12:29:55.545389] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.352 [2024-11-15 12:29:55.545418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.352 #36 NEW cov: 12473 ft: 14895 corp: 17/270b lim: 25 exec/s: 36 rss: 74Mb L: 7/25 MS: 1 CrossOver- 00:08:15.352 [2024-11-15 12:29:55.585484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.352 [2024-11-15 12:29:55.585512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.352 #37 NEW cov: 12473 ft: 14950 corp: 18/279b lim: 25 exec/s: 37 rss: 74Mb L: 9/25 MS: 1 CopyPart- 00:08:15.352 [2024-11-15 12:29:55.645799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.352 [2024-11-15 12:29:55.645827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.352 [2024-11-15 12:29:55.645868] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.352 [2024-11-15 12:29:55.645884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.352 #38 NEW cov: 12473 ft: 14966 corp: 19/289b lim: 25 exec/s: 38 rss: 74Mb L: 10/25 MS: 1 InsertByte- 00:08:15.352 [2024-11-15 12:29:55.686134] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.352 [2024-11-15 12:29:55.686162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.352 [2024-11-15 12:29:55.686211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.352 [2024-11-15 12:29:55.686227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.352 [2024-11-15 12:29:55.686295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.352 [2024-11-15 12:29:55.686311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.352 [2024-11-15 12:29:55.686373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.352 [2024-11-15 12:29:55.686387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.612 #39 NEW cov: 12473 ft: 14967 corp: 20/312b lim: 25 exec/s: 39 rss: 74Mb L: 23/25 MS: 1 ShuffleBytes- 00:08:15.612 [2024-11-15 12:29:55.726241] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.612 [2024-11-15 12:29:55.726269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.612 [2024-11-15 12:29:55.726327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.612 [2024-11-15 12:29:55.726342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.612 [2024-11-15 12:29:55.726397] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.612 [2024-11-15 12:29:55.726411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.612 [2024-11-15 12:29:55.726468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.612 [2024-11-15 12:29:55.726484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.612 #40 NEW cov: 12473 ft: 14991 corp: 21/335b lim: 25 exec/s: 40 rss: 74Mb L: 23/25 MS: 1 ShuffleBytes- 00:08:15.612 [2024-11-15 12:29:55.786430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.612 [2024-11-15 12:29:55.786458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.612 [2024-11-15 12:29:55.786512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.612 [2024-11-15 12:29:55.786526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.612 [2024-11-15 12:29:55.786581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.612 [2024-11-15 12:29:55.786597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.612 [2024-11-15 12:29:55.786652] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.612 [2024-11-15 12:29:55.786667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.612 #41 NEW cov: 12473 ft: 15007 corp: 22/359b lim: 25 exec/s: 41 rss: 74Mb L: 24/25 MS: 1 InsertRepeatedBytes- 00:08:15.612 [2024-11-15 12:29:55.826553] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.612 [2024-11-15 12:29:55.826580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.612 [2024-11-15 12:29:55.826634] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.612 [2024-11-15 12:29:55.826650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.612 [2024-11-15 12:29:55.826705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.612 [2024-11-15 12:29:55.826720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.612 [2024-11-15 12:29:55.826774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.612 [2024-11-15 12:29:55.826790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.612 #42 NEW cov: 12473 ft: 15047 corp: 23/382b lim: 25 exec/s: 42 rss: 
74Mb L: 23/25 MS: 1 ChangeBit- 00:08:15.612 [2024-11-15 12:29:55.886676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.612 [2024-11-15 12:29:55.886709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.612 [2024-11-15 12:29:55.886744] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.612 [2024-11-15 12:29:55.886761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.612 [2024-11-15 12:29:55.886815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.612 [2024-11-15 12:29:55.886830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.612 [2024-11-15 12:29:55.886886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.612 [2024-11-15 12:29:55.886902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.612 #43 NEW cov: 12473 ft: 15054 corp: 24/405b lim: 25 exec/s: 43 rss: 75Mb L: 23/25 MS: 1 ChangeBit- 00:08:15.612 [2024-11-15 12:29:55.946822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.612 [2024-11-15 12:29:55.946848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.612 [2024-11-15 12:29:55.946902] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.612 [2024-11-15 12:29:55.946918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.612 [2024-11-15 12:29:55.946990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.612 [2024-11-15 12:29:55.947005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.612 [2024-11-15 12:29:55.947061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.612 [2024-11-15 12:29:55.947077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.871 #44 NEW cov: 12473 ft: 15060 corp: 25/428b lim: 25 exec/s: 44 rss: 75Mb L: 23/25 MS: 1 ChangeByte- 00:08:15.871 [2024-11-15 12:29:56.007132] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.871 [2024-11-15 12:29:56.007158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.871 [2024-11-15 12:29:56.007229] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.871 [2024-11-15 12:29:56.007247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.871 [2024-11-15 12:29:56.007302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.871 [2024-11-15 12:29:56.007321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.871 [2024-11-15 12:29:56.007389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.871 [2024-11-15 12:29:56.007405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.872 [2024-11-15 12:29:56.007462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:08:15.872 [2024-11-15 12:29:56.007477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:15.872 #45 NEW cov: 12473 ft: 15066 corp: 26/453b lim: 25 exec/s: 45 rss: 75Mb L: 25/25 MS: 1 CrossOver- 00:08:15.872 [2024-11-15 12:29:56.047117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.872 [2024-11-15 12:29:56.047148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.872 [2024-11-15 12:29:56.047209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.872 [2024-11-15 12:29:56.047226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.872 [2024-11-15 12:29:56.047282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.872 [2024-11-15 12:29:56.047298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.872 [2024-11-15 12:29:56.047355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.872 [2024-11-15 12:29:56.047372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.872 #46 NEW cov: 12473 ft: 15075 corp: 27/476b lim: 25 exec/s: 46 rss: 75Mb L: 23/25 MS: 1 ShuffleBytes- 00:08:15.872 [2024-11-15 12:29:56.107057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.872 [2024-11-15 12:29:56.107084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.872 [2024-11-15 12:29:56.107123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.872 [2024-11-15 12:29:56.107137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.872 #47 NEW cov: 12473 ft: 15105 corp: 28/486b lim: 25 exec/s: 47 rss: 75Mb L: 10/25 MS: 1 ShuffleBytes- 00:08:15.872 [2024-11-15 12:29:56.167583] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.872 [2024-11-15 12:29:56.167611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.872 [2024-11-15 12:29:56.167667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.872 [2024-11-15 12:29:56.167682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.872 [2024-11-15 12:29:56.167740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.872 [2024-11-15 12:29:56.167756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.872 [2024-11-15 12:29:56.167811] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.872 [2024-11-15 12:29:56.167825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.872 [2024-11-15 12:29:56.167881] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:08:15.872 [2024-11-15 12:29:56.167896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:15.872 #48 NEW cov: 12473 ft: 15136 corp: 29/511b lim: 25 exec/s: 48 rss: 75Mb L: 25/25 MS: 1 ChangeBit- 00:08:16.131 [2024-11-15 12:29:56.227257] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.131 [2024-11-15 12:29:56.227285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.131 #49 NEW cov: 12473 ft: 15176 corp: 30/520b lim: 25 exec/s: 49 rss: 75Mb L: 9/25 MS: 1 InsertByte- 00:08:16.131 [2024-11-15 12:29:56.287685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.131 [2024-11-15 12:29:56.287713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.131 [2024-11-15 12:29:56.287772] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.131 [2024-11-15 12:29:56.287789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.131 [2024-11-15 12:29:56.287848] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.131 [2024-11-15 12:29:56.287863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.131 #50 NEW cov: 12473 ft: 15382 corp: 31/539b lim: 25 exec/s: 50 rss: 75Mb L: 19/25 MS: 1 EraseBytes- 00:08:16.131 [2024-11-15 12:29:56.347996] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.131 [2024-11-15 12:29:56.348024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.131 [2024-11-15 12:29:56.348072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.131 [2024-11-15 12:29:56.348087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.131 [2024-11-15 12:29:56.348141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 
cid:2 nsid:0 00:08:16.131 [2024-11-15 12:29:56.348157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.131 [2024-11-15 12:29:56.348212] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:16.131 [2024-11-15 12:29:56.348228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.131 #51 NEW cov: 12473 ft: 15385 corp: 32/562b lim: 25 exec/s: 51 rss: 75Mb L: 23/25 MS: 1 InsertRepeatedBytes- 00:08:16.131 [2024-11-15 12:29:56.408108] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.131 [2024-11-15 12:29:56.408134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.131 [2024-11-15 12:29:56.408207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.131 [2024-11-15 12:29:56.408223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.131 [2024-11-15 12:29:56.408280] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.131 [2024-11-15 12:29:56.408296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.131 [2024-11-15 12:29:56.408356] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:16.131 [2024-11-15 12:29:56.408373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.131 #52 NEW cov: 12473 ft: 15393 corp: 33/586b lim: 25 exec/s: 52 rss: 75Mb L: 24/25 MS: 1 CopyPart- 00:08:16.131 [2024-11-15 12:29:56.448257] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.131 [2024-11-15 12:29:56.448284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.131 [2024-11-15 12:29:56.448366] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.131 [2024-11-15 12:29:56.448381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.131 [2024-11-15 12:29:56.448439] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.131 [2024-11-15 12:29:56.448454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.131 [2024-11-15 12:29:56.448516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:16.131 [2024-11-15 12:29:56.448531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.391 #53 NEW cov: 12473 ft: 15398 corp: 34/610b lim: 25 exec/s: 53 rss: 75Mb L: 24/25 MS: 1 ShuffleBytes- 00:08:16.391 [2024-11-15 12:29:56.508540] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 
nsid:0 00:08:16.391 [2024-11-15 12:29:56.508567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.391 [2024-11-15 12:29:56.508637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.391 [2024-11-15 12:29:56.508653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.391 [2024-11-15 12:29:56.508708] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.391 [2024-11-15 12:29:56.508724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.391 [2024-11-15 12:29:56.508780] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:16.391 [2024-11-15 12:29:56.508794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.391 [2024-11-15 12:29:56.508849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:08:16.391 [2024-11-15 12:29:56.508866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:16.391 #54 NEW cov: 12473 ft: 15406 corp: 35/635b lim: 25 exec/s: 27 rss: 75Mb L: 25/25 MS: 1 ShuffleBytes- 00:08:16.391 #54 DONE cov: 12473 ft: 15406 corp: 35/635b lim: 25 exec/s: 27 rss: 75Mb 00:08:16.391 Done 54 runs in 2 second(s) 00:08:16.391 12:29:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:08:16.391 12:29:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:16.391 12:29:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:16.391 12:29:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:08:16.391 12:29:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:08:16.391 12:29:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:16.391 12:29:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:16.391 12:29:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:16.391 12:29:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:08:16.391 12:29:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:16.391 12:29:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:16.391 12:29:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:08:16.391 12:29:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:08:16.391 12:29:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:16.391 12:29:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:08:16.391 12:29:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:16.391 12:29:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:16.391 12:29:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:16.391 12:29:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:08:16.391 [2024-11-15 12:29:56.690234] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:08:16.391 [2024-11-15 12:29:56.690334] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid678230 ] 00:08:16.650 [2024-11-15 12:29:56.904595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.651 [2024-11-15 12:29:56.944066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.910 [2024-11-15 12:29:57.003758] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.910 [2024-11-15 12:29:57.019991] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:08:16.910 INFO: Running with entropic power schedule (0xFF, 100). 00:08:16.910 INFO: Seed: 2457418757 00:08:16.910 INFO: Loaded 1 modules (387659 inline 8-bit counters): 387659 [0x2c4084c, 0x2c9f297), 00:08:16.910 INFO: Loaded 1 PC tables (387659 PCs): 387659 [0x2c9f298,0x3289748), 00:08:16.910 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:16.910 INFO: A corpus is not provided, starting from an empty corpus 00:08:16.910 #2 INITED exec/s: 0 rss: 66Mb 00:08:16.910 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:16.910 This may also happen if the target rejected all inputs we tried so far 00:08:16.910 [2024-11-15 12:29:57.075581] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.910 [2024-11-15 12:29:57.075613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.910 [2024-11-15 12:29:57.075674] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.910 [2024-11-15 12:29:57.075691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.910 [2024-11-15 12:29:57.075744] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.910 [2024-11-15 12:29:57.075760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.170 NEW_FUNC[1/716]: 0x467728 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:08:17.170 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:17.170 #13 NEW cov: 12309 ft: 12317 corp: 2/77b lim: 100 exec/s: 0 rss: 73Mb L: 76/76 MS: 1 InsertRepeatedBytes- 00:08:17.170 [2024-11-15 12:29:57.406502] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.170 [2024-11-15 12:29:57.406542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.170 [2024-11-15 12:29:57.406612] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.170 [2024-11-15 12:29:57.406628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.170 [2024-11-15 12:29:57.406680] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.170 [2024-11-15 12:29:57.406696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.170 NEW_FUNC[1/1]: 0x1fcbbd8 in msg_queue_run_batch /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:833 00:08:17.170 #14 NEW cov: 12431 ft: 12916 corp: 3/153b lim: 100 exec/s: 0 rss: 74Mb L: 76/76 MS: 1 ChangeBinInt- 00:08:17.170 [2024-11-15 12:29:57.466549] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.170 [2024-11-15 12:29:57.466579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.170 [2024-11-15 12:29:57.466632] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.170 [2024-11-15 12:29:57.466648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.170 [2024-11-15 12:29:57.466703] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:2561 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.170 [2024-11-15 12:29:57.466719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.170 #15 NEW cov: 12437 ft: 13177 corp: 4/229b lim: 100 exec/s: 0 rss: 74Mb L: 76/76 MS: 1 CrossOver- 00:08:17.170 [2024-11-15 12:29:57.506586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.170 [2024-11-15 12:29:57.506614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.170 [2024-11-15 12:29:57.506677] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.170 [2024-11-15 12:29:57.506694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.170 [2024-11-15 12:29:57.506748] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:2561 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.170 [2024-11-15 12:29:57.506764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.430 #16 NEW cov: 12522 ft: 13428 corp: 5/305b lim: 100 exec/s: 0 rss: 74Mb L: 76/76 MS: 1 ChangeBinInt- 00:08:17.430 [2024-11-15 12:29:57.566768] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.430 [2024-11-15 12:29:57.566796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.430 [2024-11-15 12:29:57.566857] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.430 [2024-11-15 12:29:57.566873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.430 [2024-11-15 12:29:57.566929] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:128 len:2561 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.430 [2024-11-15 12:29:57.566944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.430 #17 NEW cov: 12522 ft: 13608 corp: 6/381b lim: 100 exec/s: 0 rss: 74Mb L: 76/76 MS: 1 ChangeBit- 00:08:17.430 [2024-11-15 12:29:57.607000] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:168034304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.430 [2024-11-15 12:29:57.607028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.430 [2024-11-15 12:29:57.607075] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.430 [2024-11-15 12:29:57.607094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.430 [2024-11-15 12:29:57.607146] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.430 [2024-11-15 12:29:57.607161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.430 [2024-11-15 12:29:57.607214] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:4294901760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.430 [2024-11-15 12:29:57.607228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.430 #23 NEW cov: 12522 ft: 14025 corp: 7/461b lim: 100 exec/s: 0 rss: 74Mb L: 80/80 MS: 1 CMP- DE: "\004\000\000\000"- 00:08:17.430 [2024-11-15 12:29:57.667022] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.430 [2024-11-15 12:29:57.667049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.430 [2024-11-15 12:29:57.667096] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.430 [2024-11-15 12:29:57.667111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.430 [2024-11-15 12:29:57.667164] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.430 [2024-11-15 12:29:57.667180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.430 #24 NEW cov: 12522 ft: 14096 corp: 8/537b lim: 100 exec/s: 0 rss: 74Mb L: 76/80 MS: 1 ChangeBit- 00:08:17.430 [2024-11-15 12:29:57.727211] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.430 [2024-11-15 12:29:57.727240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.430 [2024-11-15 12:29:57.727288] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.430 [2024-11-15 12:29:57.727304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.430 [2024-11-15 12:29:57.727366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:128 len:2561 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.430 [2024-11-15 12:29:57.727382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.430 #25 NEW cov: 12522 ft: 14148 corp: 9/613b lim: 100 exec/s: 0 rss: 74Mb L: 76/80 MS: 1 ChangeByte- 00:08:17.689 [2024-11-15 12:29:57.787099] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.689 [2024-11-15 12:29:57.787127] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.689 #27 NEW cov: 12522 ft: 15056 corp: 10/647b lim: 100 exec/s: 0 rss: 74Mb L: 34/80 MS: 2 CopyPart-CrossOver- 00:08:17.689 [2024-11-15 12:29:57.827480] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.689 [2024-11-15 12:29:57.827507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.689 [2024-11-15 12:29:57.827543] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.689 [2024-11-15 12:29:57.827559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.689 [2024-11-15 12:29:57.827616] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:128 len:2561 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.689 [2024-11-15 12:29:57.827633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.689 #28 NEW cov: 12522 ft: 15123 corp: 11/723b lim: 100 exec/s: 0 rss: 74Mb L: 76/80 MS: 1 CopyPart- 00:08:17.690 [2024-11-15 12:29:57.867714] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.690 [2024-11-15 12:29:57.867741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.690 [2024-11-15 12:29:57.867812] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.690 [2024-11-15 12:29:57.867828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.690 [2024-11-15 12:29:57.867880] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:128 len:2561 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.690 [2024-11-15 12:29:57.867895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.690 [2024-11-15 12:29:57.867948] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.690 [2024-11-15 12:29:57.867963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.690 #29 NEW cov: 12522 ft: 15149 corp: 12/810b lim: 100 exec/s: 0 rss: 74Mb L: 87/87 MS: 1 CopyPart- 00:08:17.690 [2024-11-15 12:29:57.927754] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:134873088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.690 [2024-11-15 12:29:57.927782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.690 [2024-11-15 12:29:57.927822] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.690 [2024-11-15 12:29:57.927836] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.690 [2024-11-15 12:29:57.927890] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:1275068416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.690 [2024-11-15 12:29:57.927906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.690 NEW_FUNC[1/1]: 0x1c350e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:17.690 #31 NEW cov: 12545 ft: 15230 corp: 13/882b lim: 100 exec/s: 0 rss: 74Mb L: 72/87 MS: 2 ChangeBit-CrossOver- 00:08:17.690 [2024-11-15 12:29:57.967560] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17437937757346332672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.690 [2024-11-15 12:29:57.967588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.690 #32 NEW cov: 12545 ft: 15297 corp: 14/916b lim: 100 exec/s: 0 rss: 74Mb L: 34/87 MS: 1 ChangeByte- 00:08:17.690 [2024-11-15 12:29:58.027899] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.690 [2024-11-15 12:29:58.027927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.690 [2024-11-15 12:29:58.027963] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.690 [2024-11-15 12:29:58.027982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.950 #33 NEW cov: 12545 ft: 15695 corp: 15/974b lim: 100 exec/s: 0 rss: 74Mb L: 58/87 MS: 1 EraseBytes- 00:08:17.950 [2024-11-15 12:29:58.068284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4278452224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.950 [2024-11-15 12:29:58.068312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.950 [2024-11-15 12:29:58.068386] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.950 [2024-11-15 12:29:58.068403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.950 [2024-11-15 12:29:58.068457] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.950 [2024-11-15 12:29:58.068473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.950 [2024-11-15 12:29:58.068526] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:4294901760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.950 [2024-11-15 12:29:58.068542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.950 #34 NEW cov: 12545 ft: 15706 corp: 16/1054b lim: 100 exec/s: 34 rss: 74Mb L: 80/87 MS: 1 ChangeByte- 
00:08:17.950 [2024-11-15 12:29:58.128500] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.950 [2024-11-15 12:29:58.128528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.951 [2024-11-15 12:29:58.128578] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.951 [2024-11-15 12:29:58.128593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.951 [2024-11-15 12:29:58.128647] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.951 [2024-11-15 12:29:58.128663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.951 [2024-11-15 12:29:58.128717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:4294901760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.951 [2024-11-15 12:29:58.128733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.951 #35 NEW cov: 12545 ft: 15733 corp: 17/1134b lim: 100 exec/s: 35 rss: 74Mb L: 80/87 MS: 1 CMP- DE: "\000\000\000\006"- 00:08:17.951 [2024-11-15 12:29:58.168425] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:738852864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.951 [2024-11-15 12:29:58.168452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.951 [2024-11-15 12:29:58.168506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.951 [2024-11-15 12:29:58.168523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.951 [2024-11-15 12:29:58.168575] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.951 [2024-11-15 12:29:58.168591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.951 #36 NEW cov: 12545 ft: 15745 corp: 18/1211b lim: 100 exec/s: 36 rss: 74Mb L: 77/87 MS: 1 InsertByte- 00:08:17.951 [2024-11-15 12:29:58.208700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4278452224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.951 [2024-11-15 12:29:58.208727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.951 [2024-11-15 12:29:58.208780] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.951 [2024-11-15 12:29:58.208793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.951 [2024-11-15 12:29:58.208845] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.951 [2024-11-15 12:29:58.208859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.951 [2024-11-15 12:29:58.208911] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:42949672960 len:255 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.951 [2024-11-15 12:29:58.208925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.951 #37 NEW cov: 12545 ft: 15793 corp: 19/1304b lim: 100 exec/s: 37 rss: 74Mb L: 93/93 MS: 1 CrossOver- 00:08:17.951 [2024-11-15 12:29:58.268707] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.951 [2024-11-15 12:29:58.268736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.951 [2024-11-15 12:29:58.268782] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.951 [2024-11-15 12:29:58.268799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.951 [2024-11-15 12:29:58.268850] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:2561 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.951 [2024-11-15 12:29:58.268865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.951 #38 NEW cov: 12545 ft: 15891 corp: 20/1380b lim: 100 exec/s: 38 rss: 74Mb L: 76/93 MS: 1 ChangeByte- 00:08:18.307 [2024-11-15 12:29:58.308534] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:168034304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.307 [2024-11-15 12:29:58.308563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.307 #39 NEW cov: 12545 ft: 15918 corp: 21/1414b lim: 100 exec/s: 39 rss: 74Mb L: 34/93 MS: 1 PersAutoDict- DE: "\004\000\000\000"- 00:08:18.307 [2024-11-15 12:29:58.348822] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.307 [2024-11-15 12:29:58.348850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.307 [2024-11-15 12:29:58.348886] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:11673330234144325632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.307 [2024-11-15 12:29:58.348902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.307 #40 NEW cov: 12545 ft: 15926 corp: 22/1473b lim: 100 exec/s: 40 rss: 75Mb L: 59/93 MS: 1 InsertByte- 00:08:18.307 [2024-11-15 12:29:58.409267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4278452224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.307 [2024-11-15 12:29:58.409298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.307 [2024-11-15 12:29:58.409340] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.307 [2024-11-15 12:29:58.409356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.307 [2024-11-15 12:29:58.409408] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.307 [2024-11-15 12:29:58.409422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.307 [2024-11-15 12:29:58.409476] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446742974197989375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.307 [2024-11-15 12:29:58.409492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.307 #41 NEW cov: 12545 ft: 15943 corp: 23/1553b lim: 100 exec/s: 41 rss: 75Mb L: 80/93 MS: 1 ChangeBinInt- 00:08:18.307 [2024-11-15 12:29:58.449387] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4278452224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.307 [2024-11-15 12:29:58.449414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.307 [2024-11-15 12:29:58.449484] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.307 [2024-11-15 12:29:58.449499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.307 [2024-11-15 12:29:58.449555] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.307 [2024-11-15 12:29:58.449570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.307 [2024-11-15 12:29:58.449625] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:42949672960 len:255 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.307 [2024-11-15 12:29:58.449641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.307 #42 NEW cov: 12545 ft: 15954 corp: 24/1646b lim: 100 exec/s: 42 rss: 75Mb L: 93/93 MS: 1 ChangeBit- 00:08:18.307 [2024-11-15 12:29:58.509384] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.307 [2024-11-15 12:29:58.509413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.307 [2024-11-15 12:29:58.509456] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.307 [2024-11-15 12:29:58.509472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.307 [2024-11-15 
12:29:58.509525] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9223372036854775808 len:11 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.307 [2024-11-15 12:29:58.509556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.307 #43 NEW cov: 12545 ft: 15963 corp: 25/1723b lim: 100 exec/s: 43 rss: 75Mb L: 77/93 MS: 1 InsertByte- 00:08:18.307 [2024-11-15 12:29:58.569420] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.307 [2024-11-15 12:29:58.569450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.307 [2024-11-15 12:29:58.569528] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.307 [2024-11-15 12:29:58.569545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.307 #44 NEW cov: 12545 ft: 15971 corp: 26/1765b lim: 100 exec/s: 44 rss: 75Mb L: 42/93 MS: 1 CrossOver- 00:08:18.307 [2024-11-15 12:29:58.629474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:113249865695232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.308 [2024-11-15 12:29:58.629510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.623 #45 NEW cov: 12545 ft: 16019 corp: 27/1800b lim: 100 exec/s: 45 rss: 75Mb L: 35/93 MS: 1 InsertByte- 00:08:18.623 [2024-11-15 12:29:58.670059] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4278452224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.623 [2024-11-15 12:29:58.670093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.623 [2024-11-15 12:29:58.670131] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.623 [2024-11-15 12:29:58.670147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.623 [2024-11-15 12:29:58.670201] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.623 [2024-11-15 12:29:58.670216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.623 [2024-11-15 12:29:58.670270] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.623 [2024-11-15 12:29:58.670285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.623 #46 NEW cov: 12545 ft: 16023 corp: 28/1899b lim: 100 exec/s: 46 rss: 75Mb L: 99/99 MS: 1 CopyPart- 00:08:18.623 [2024-11-15 12:29:58.709983] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.623 [2024-11-15 12:29:58.710013] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.623 [2024-11-15 12:29:58.710048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.624 [2024-11-15 12:29:58.710064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.624 [2024-11-15 12:29:58.710118] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:128 len:2561 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.624 [2024-11-15 12:29:58.710132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.624 #47 NEW cov: 12545 ft: 16068 corp: 29/1975b lim: 100 exec/s: 47 rss: 75Mb L: 76/99 MS: 1 ChangeBinInt- 00:08:18.624 [2024-11-15 12:29:58.749780] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:113249865695232 len:23809 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.624 [2024-11-15 12:29:58.749808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.624 #48 NEW cov: 12545 ft: 16119 corp: 30/2011b lim: 100 exec/s: 48 rss: 75Mb L: 36/99 MS: 1 InsertByte- 00:08:18.624 [2024-11-15 12:29:58.810422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.624 [2024-11-15 12:29:58.810452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.624 [2024-11-15 12:29:58.810500] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.624 [2024-11-15 12:29:58.810516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.624 [2024-11-15 12:29:58.810568] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:549755813888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.624 [2024-11-15 12:29:58.810600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.624 [2024-11-15 12:29:58.810654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.624 [2024-11-15 12:29:58.810670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.624 #49 NEW cov: 12545 ft: 16145 corp: 31/2102b lim: 100 exec/s: 49 rss: 75Mb L: 91/99 MS: 1 CrossOver- 00:08:18.624 [2024-11-15 12:29:58.850370] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.624 [2024-11-15 12:29:58.850398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.624 [2024-11-15 12:29:58.850445] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.624 [2024-11-15 
12:29:58.850460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.624 [2024-11-15 12:29:58.850514] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:128 len:2561 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.624 [2024-11-15 12:29:58.850529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.624 #50 NEW cov: 12545 ft: 16173 corp: 32/2177b lim: 100 exec/s: 50 rss: 75Mb L: 75/99 MS: 1 EraseBytes- 00:08:18.624 [2024-11-15 12:29:58.910536] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.624 [2024-11-15 12:29:58.910564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.624 [2024-11-15 12:29:58.910628] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.624 [2024-11-15 12:29:58.910645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.624 [2024-11-15 12:29:58.910697] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:2561 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.624 [2024-11-15 12:29:58.910713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.624 #51 NEW cov: 12545 ft: 16174 corp: 33/2253b lim: 100 exec/s: 51 rss: 75Mb L: 76/99 MS: 1 ChangeBit- 00:08:18.624 [2024-11-15 12:29:58.950811] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.624 [2024-11-15 12:29:58.950839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.624 [2024-11-15 12:29:58.950896] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:37017820613050368 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.624 [2024-11-15 12:29:58.950912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.624 [2024-11-15 12:29:58.950968] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9476562641788044163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.624 [2024-11-15 12:29:58.950985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.624 [2024-11-15 12:29:58.951038] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:19456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.624 [2024-11-15 12:29:58.951054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.884 #52 NEW cov: 12545 ft: 16207 corp: 34/2352b lim: 100 exec/s: 52 rss: 75Mb L: 99/99 MS: 1 InsertRepeatedBytes- 00:08:18.884 [2024-11-15 12:29:58.990628] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:85899345920 len:1025 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:08:18.884 [2024-11-15 12:29:58.990656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.884 [2024-11-15 12:29:58.990694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.884 [2024-11-15 12:29:58.990709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.884 #53 NEW cov: 12545 ft: 16231 corp: 35/2395b lim: 100 exec/s: 53 rss: 75Mb L: 43/99 MS: 1 InsertByte- 00:08:18.884 [2024-11-15 12:29:59.050919] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.884 [2024-11-15 12:29:59.050947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.884 [2024-11-15 12:29:59.051010] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.884 [2024-11-15 12:29:59.051026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.884 [2024-11-15 12:29:59.051079] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:4 len:2561 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.884 [2024-11-15 12:29:59.051094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.884 #54 NEW cov: 12545 ft: 16258 corp: 36/2471b lim: 100 exec/s: 27 rss: 75Mb L: 76/99 MS: 1 CMP- DE: "\000\000\000\004"- 00:08:18.884 #54 DONE cov: 12545 ft: 16258 corp: 36/2471b lim: 100 exec/s: 27 rss: 75Mb 00:08:18.884 ###### Recommended dictionary. ###### 00:08:18.884 "\004\000\000\000" # Uses: 1 00:08:18.884 "\000\000\000\006" # Uses: 0 00:08:18.884 "\000\000\000\004" # Uses: 0 00:08:18.884 ###### End of recommended dictionary. 
###### 00:08:18.884 Done 54 runs in 2 second(s) 00:08:18.884 12:29:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:08:18.884 12:29:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:18.884 12:29:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:18.884 12:29:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:08:18.884 00:08:18.884 real 1m5.720s 00:08:18.884 user 1m40.090s 00:08:18.884 sys 0m9.271s 00:08:18.884 12:29:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.884 12:29:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:18.884 ************************************ 00:08:18.884 END TEST nvmf_llvm_fuzz 00:08:18.884 ************************************ 00:08:19.145 12:29:59 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:08:19.145 12:29:59 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:08:19.145 12:29:59 llvm_fuzz -- fuzz/llvm.sh@20 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:19.145 12:29:59 llvm_fuzz -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.145 12:29:59 llvm_fuzz -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.145 12:29:59 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:19.145 ************************************ 00:08:19.145 START TEST vfio_llvm_fuzz 00:08:19.145 ************************************ 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:19.145 * Looking for test storage... 
00:08:19.145 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:19.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.145 --rc genhtml_branch_coverage=1 00:08:19.145 --rc genhtml_function_coverage=1 00:08:19.145 --rc genhtml_legend=1 00:08:19.145 --rc geninfo_all_blocks=1 00:08:19.145 --rc geninfo_unexecuted_blocks=1 00:08:19.145 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:19.145 ' 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:19.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.145 --rc genhtml_branch_coverage=1 00:08:19.145 --rc genhtml_function_coverage=1 00:08:19.145 --rc genhtml_legend=1 00:08:19.145 --rc geninfo_all_blocks=1 00:08:19.145 --rc geninfo_unexecuted_blocks=1 00:08:19.145 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:19.145 ' 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:19.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.145 --rc genhtml_branch_coverage=1 00:08:19.145 --rc genhtml_function_coverage=1 00:08:19.145 --rc genhtml_legend=1 00:08:19.145 --rc geninfo_all_blocks=1 00:08:19.145 --rc geninfo_unexecuted_blocks=1 00:08:19.145 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:19.145 ' 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:19.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.145 --rc genhtml_branch_coverage=1 00:08:19.145 --rc genhtml_function_coverage=1 00:08:19.145 --rc genhtml_legend=1 00:08:19.145 --rc geninfo_all_blocks=1 00:08:19.145 --rc geninfo_unexecuted_blocks=1 00:08:19.145 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:19.145 ' 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:08:19.145 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:19.146 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:19.408 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:19.408 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:19.408 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:19.408 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:19.408 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:19.408 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:19.408 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:19.408 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:19.408 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:19.408 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:19.408 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:19.408 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:08:19.408 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:19.408 #define SPDK_CONFIG_H 00:08:19.408 #define SPDK_CONFIG_AIO_FSDEV 1 00:08:19.408 #define SPDK_CONFIG_APPS 1 00:08:19.408 #define SPDK_CONFIG_ARCH native 00:08:19.408 #undef SPDK_CONFIG_ASAN 00:08:19.408 #undef SPDK_CONFIG_AVAHI 00:08:19.408 #undef SPDK_CONFIG_CET 00:08:19.408 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:08:19.408 #define SPDK_CONFIG_COVERAGE 1 00:08:19.408 #define SPDK_CONFIG_CROSS_PREFIX 00:08:19.408 #undef SPDK_CONFIG_CRYPTO 00:08:19.408 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:19.408 #undef SPDK_CONFIG_CUSTOMOCF 00:08:19.408 #undef SPDK_CONFIG_DAOS 00:08:19.408 #define SPDK_CONFIG_DAOS_DIR 00:08:19.408 #define SPDK_CONFIG_DEBUG 1 00:08:19.408 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:19.408 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:19.408 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:19.408 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:19.408 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:19.408 #undef SPDK_CONFIG_DPDK_UADK 00:08:19.408 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:19.408 #define SPDK_CONFIG_EXAMPLES 1 00:08:19.408 #undef SPDK_CONFIG_FC 00:08:19.408 #define SPDK_CONFIG_FC_PATH 00:08:19.408 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:19.408 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:19.408 #define SPDK_CONFIG_FSDEV 1 00:08:19.408 #undef SPDK_CONFIG_FUSE 00:08:19.408 #define SPDK_CONFIG_FUZZER 1 00:08:19.408 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:08:19.408 #undef 
SPDK_CONFIG_GOLANG 00:08:19.408 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:19.408 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:19.408 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:19.408 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:19.408 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:19.408 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:19.408 #undef SPDK_CONFIG_HAVE_LZ4 00:08:19.408 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:08:19.408 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:08:19.408 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:19.408 #define SPDK_CONFIG_IDXD 1 00:08:19.408 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:19.408 #undef SPDK_CONFIG_IPSEC_MB 00:08:19.408 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:19.408 #define SPDK_CONFIG_ISAL 1 00:08:19.409 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:19.409 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:19.409 #define SPDK_CONFIG_LIBDIR 00:08:19.409 #undef SPDK_CONFIG_LTO 00:08:19.409 #define SPDK_CONFIG_MAX_LCORES 128 00:08:19.409 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:08:19.409 #define SPDK_CONFIG_NVME_CUSE 1 00:08:19.409 #undef SPDK_CONFIG_OCF 00:08:19.409 #define SPDK_CONFIG_OCF_PATH 00:08:19.409 #define SPDK_CONFIG_OPENSSL_PATH 00:08:19.409 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:19.409 #define SPDK_CONFIG_PGO_DIR 00:08:19.409 #undef SPDK_CONFIG_PGO_USE 00:08:19.409 #define SPDK_CONFIG_PREFIX /usr/local 00:08:19.409 #undef SPDK_CONFIG_RAID5F 00:08:19.409 #undef SPDK_CONFIG_RBD 00:08:19.409 #define SPDK_CONFIG_RDMA 1 00:08:19.409 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:19.409 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:19.409 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:19.409 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:19.409 #undef SPDK_CONFIG_SHARED 00:08:19.409 #undef SPDK_CONFIG_SMA 00:08:19.409 #define SPDK_CONFIG_TESTS 1 00:08:19.409 #undef SPDK_CONFIG_TSAN 00:08:19.409 #define SPDK_CONFIG_UBLK 1 00:08:19.409 #define SPDK_CONFIG_UBSAN 1 00:08:19.409 #undef SPDK_CONFIG_UNIT_TESTS 00:08:19.409 #undef SPDK_CONFIG_URING 00:08:19.409 #define SPDK_CONFIG_URING_PATH 00:08:19.409 #undef SPDK_CONFIG_URING_ZNS 00:08:19.409 #undef SPDK_CONFIG_USDT 00:08:19.409 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:19.409 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:19.409 #define SPDK_CONFIG_VFIO_USER 1 00:08:19.409 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:19.409 #define SPDK_CONFIG_VHOST 1 00:08:19.409 #define SPDK_CONFIG_VIRTIO 1 00:08:19.409 #undef SPDK_CONFIG_VTUNE 00:08:19.409 #define SPDK_CONFIG_VTUNE_DIR 00:08:19.409 #define SPDK_CONFIG_WERROR 1 00:08:19.409 #define SPDK_CONFIG_WPDK_DIR 00:08:19.409 #undef SPDK_CONFIG_XNVME 00:08:19.409 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:08:19.409 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:08:19.410 12:29:59 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # : 0 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # export 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:19.410 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:19.411 12:29:59 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@206 -- # cat 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # export 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # _LCOV= 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ 1 -eq 1 ]] 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # _LCOV=1 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@275 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # export valgrind= 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # valgrind= 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # uname -s 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@289 -- # MAKE=make 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j72 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@309 -- # TEST_MODE= 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # [[ -z 678623 ]] 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # kill -0 678623 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@344 -- # local mount target_dir 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.tf0h2l 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.tf0h2l/tests/vfio /tmp/spdk.tf0h2l 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # df -T 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=86450212864 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=94500274176 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # 
uses["$mount"]=8050061312 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=47246708736 00:08:19.411 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=47250137088 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=18893950976 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=18900058112 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=6107136 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=47249653760 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=47250137088 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=483328 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=9450012672 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=9450024960 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:08:19.412 * Looking for test storage... 
00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@381 -- # local target_space new_size 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # mount=/ 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # target_space=86450212864 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@394 -- # new_size=10264653824 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:19.412 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@402 -- # return 0 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # set -o errtrace 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1685 -- # true 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1687 -- # xtrace_fd 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:19.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.412 --rc genhtml_branch_coverage=1 00:08:19.412 --rc genhtml_function_coverage=1 00:08:19.412 --rc genhtml_legend=1 00:08:19.412 --rc geninfo_all_blocks=1 00:08:19.412 --rc geninfo_unexecuted_blocks=1 00:08:19.412 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:19.412 ' 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:19.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.412 --rc genhtml_branch_coverage=1 00:08:19.412 --rc genhtml_function_coverage=1 00:08:19.412 --rc genhtml_legend=1 00:08:19.412 --rc geninfo_all_blocks=1 00:08:19.412 --rc geninfo_unexecuted_blocks=1 00:08:19.412 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:19.412 ' 00:08:19.412 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:19.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.413 --rc genhtml_branch_coverage=1 00:08:19.413 --rc genhtml_function_coverage=1 00:08:19.413 --rc genhtml_legend=1 00:08:19.413 --rc geninfo_all_blocks=1 00:08:19.413 --rc geninfo_unexecuted_blocks=1 00:08:19.413 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:19.413 ' 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:19.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.413 --rc genhtml_branch_coverage=1 00:08:19.413 --rc genhtml_function_coverage=1 00:08:19.413 --rc genhtml_legend=1 00:08:19.413 --rc geninfo_all_blocks=1 00:08:19.413 --rc geninfo_unexecuted_blocks=1 00:08:19.413 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:19.413 ' 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:08:19.413 12:29:59 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:08:19.413 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:19.413 12:29:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:08:19.673 [2024-11-15 12:29:59.773115] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:08:19.673 [2024-11-15 12:29:59.773202] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid678730 ] 00:08:19.673 [2024-11-15 12:29:59.870049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.673 [2024-11-15 12:29:59.917592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.932 INFO: Running with entropic power schedule (0xFF, 100). 00:08:19.932 INFO: Seed: 1246452277 00:08:19.932 INFO: Loaded 1 modules (384895 inline 8-bit counters): 384895 [0x2c0204c, 0x2c5ffcb), 00:08:19.932 INFO: Loaded 1 PC tables (384895 PCs): 384895 [0x2c5ffd0,0x323f7c0), 00:08:19.932 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:19.932 INFO: A corpus is not provided, starting from an empty corpus 00:08:19.932 #2 INITED exec/s: 0 rss: 67Mb 00:08:19.932 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:19.932 This may also happen if the target rejected all inputs we tried so far 00:08:19.932 [2024-11-15 12:30:00.179467] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:08:20.450 NEW_FUNC[1/670]: 0x43b5e8 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:08:20.450 NEW_FUNC[2/670]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:20.450 #21 NEW cov: 11158 ft: 11139 corp: 2/7b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 4 CrossOver-ChangeBinInt-InsertRepeatedBytes-InsertByte- 00:08:20.450 NEW_FUNC[1/2]: 0x12fe9c8 in spdk_nvmf_request_complete /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:4768 00:08:20.450 NEW_FUNC[2/2]: 0x12fed98 in spdk_thread_exec_msg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/thread.h:546 00:08:20.450 #27 NEW cov: 11184 ft: 13590 corp: 3/13b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 ShuffleBytes- 00:08:20.708 #28 NEW cov: 11184 ft: 14477 corp: 4/19b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 1 ChangeBinInt- 00:08:20.708 NEW_FUNC[1/2]: 0x1363d98 in nvmf_prop_get_acq /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1411 00:08:20.709 NEW_FUNC[2/2]: 0x1c01538 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:20.709 #42 NEW cov: 11223 ft: 14588 corp: 5/25b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 4 EraseBytes-InsertByte-ChangeASCIIInt-CopyPart- 00:08:20.967 #43 NEW cov: 11223 ft: 15037 corp: 6/31b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 1 ChangeBit- 00:08:20.967 #44 NEW cov: 11223 ft: 15402 corp: 7/37b lim: 6 exec/s: 44 rss: 75Mb L: 6/6 MS: 1 ShuffleBytes- 00:08:21.227 #55 NEW cov: 11223 ft: 15715 corp: 8/43b lim: 6 exec/s: 55 rss: 75Mb L: 6/6 MS: 1 ChangeBinInt- 00:08:21.227 #56 NEW cov: 11223 ft: 15823 corp: 9/49b lim: 6 exec/s: 56 rss: 75Mb L: 6/6 MS: 1 ChangeBit- 
00:08:21.485 #57 NEW cov: 11223 ft: 16040 corp: 10/55b lim: 6 exec/s: 57 rss: 75Mb L: 6/6 MS: 1 CopyPart- 00:08:21.485 #58 NEW cov: 11223 ft: 16066 corp: 11/61b lim: 6 exec/s: 58 rss: 76Mb L: 6/6 MS: 1 ShuffleBytes- 00:08:21.744 #59 NEW cov: 11223 ft: 16769 corp: 12/67b lim: 6 exec/s: 59 rss: 76Mb L: 6/6 MS: 1 CrossOver- 00:08:21.744 #60 NEW cov: 11230 ft: 17508 corp: 13/73b lim: 6 exec/s: 60 rss: 76Mb L: 6/6 MS: 1 CopyPart- 00:08:21.744 #61 NEW cov: 11230 ft: 17566 corp: 14/79b lim: 6 exec/s: 61 rss: 76Mb L: 6/6 MS: 1 CMP- DE: "\377\377\377\377"- 00:08:22.003 #62 NEW cov: 11230 ft: 17620 corp: 15/85b lim: 6 exec/s: 31 rss: 76Mb L: 6/6 MS: 1 ChangeByte- 00:08:22.003 #62 DONE cov: 11230 ft: 17620 corp: 15/85b lim: 6 exec/s: 31 rss: 76Mb 00:08:22.003 ###### Recommended dictionary. ###### 00:08:22.003 "\377\377\377\377" # Uses: 0 00:08:22.003 ###### End of recommended dictionary. ###### 00:08:22.003 Done 62 runs in 2 second(s) 00:08:22.003 [2024-11-15 12:30:02.197539] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:08:22.263 12:30:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:08:22.263 12:30:02 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:22.263 12:30:02 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:22.263 12:30:02 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:08:22.263 12:30:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:08:22.263 12:30:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:22.263 12:30:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:22.263 12:30:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:22.263 12:30:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:08:22.263 12:30:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:08:22.263 12:30:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:08:22.263 12:30:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:08:22.263 12:30:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:22.263 12:30:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:22.263 12:30:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:22.263 12:30:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:08:22.263 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:22.263 12:30:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:22.263 12:30:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:22.263 12:30:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:08:22.263 [2024-11-15 12:30:02.489736] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:08:22.263 [2024-11-15 12:30:02.489824] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid679217 ] 00:08:22.263 [2024-11-15 12:30:02.588417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.523 [2024-11-15 12:30:02.640608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.523 INFO: Running with entropic power schedule (0xFF, 100). 00:08:22.523 INFO: Seed: 3967453616 00:08:22.781 INFO: Loaded 1 modules (384895 inline 8-bit counters): 384895 [0x2c0204c, 0x2c5ffcb), 00:08:22.781 INFO: Loaded 1 PC tables (384895 PCs): 384895 [0x2c5ffd0,0x323f7c0), 00:08:22.781 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:22.781 INFO: A corpus is not provided, starting from an empty corpus 00:08:22.781 #2 INITED exec/s: 0 rss: 67Mb 00:08:22.781 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:22.781 This may also happen if the target rejected all inputs we tried so far 00:08:22.781 [2024-11-15 12:30:02.892493] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:08:22.781 [2024-11-15 12:30:02.943344] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:22.781 [2024-11-15 12:30:02.943379] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:22.781 [2024-11-15 12:30:02.943412] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:23.040 NEW_FUNC[1/673]: 0x43bb88 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:08:23.040 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:23.040 #5 NEW cov: 11161 ft: 11095 corp: 2/5b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 3 InsertByte-InsertByte-CopyPart- 00:08:23.298 [2024-11-15 12:30:03.435049] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:23.298 [2024-11-15 12:30:03.435087] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:23.298 [2024-11-15 12:30:03.435122] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:23.298 NEW_FUNC[1/1]: 0x20a97b8 in spdk_bit_array_get /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/util/bit_array.c:152 00:08:23.298 #16 NEW cov: 11180 ft: 14093 corp: 3/9b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:08:23.298 [2024-11-15 12:30:03.624836] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:23.298 [2024-11-15 12:30:03.624861] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:23.298 [2024-11-15 12:30:03.624879] vfio_user.c: 144:vfio_user_read: *ERROR*: 
Command 1 return failure 00:08:23.557 NEW_FUNC[1/1]: 0x1c01538 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:23.558 #18 NEW cov: 11197 ft: 15110 corp: 4/13b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 2 EraseBytes-CrossOver- 00:08:23.558 [2024-11-15 12:30:03.822575] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:23.558 [2024-11-15 12:30:03.822603] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:23.558 [2024-11-15 12:30:03.822620] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:23.816 #19 NEW cov: 11197 ft: 16327 corp: 5/17b lim: 4 exec/s: 19 rss: 75Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:23.816 [2024-11-15 12:30:04.016568] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:23.816 [2024-11-15 12:30:04.016593] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:23.816 [2024-11-15 12:30:04.016627] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:23.816 #20 NEW cov: 11197 ft: 16493 corp: 6/21b lim: 4 exec/s: 20 rss: 75Mb L: 4/4 MS: 1 ChangeBinInt- 00:08:24.075 [2024-11-15 12:30:04.197427] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:24.075 [2024-11-15 12:30:04.197451] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:24.075 [2024-11-15 12:30:04.197468] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:24.075 #24 NEW cov: 11197 ft: 16563 corp: 7/25b lim: 4 exec/s: 24 rss: 75Mb L: 4/4 MS: 4 EraseBytes-CopyPart-CrossOver-InsertByte- 00:08:24.075 [2024-11-15 12:30:04.376212] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:24.075 [2024-11-15 12:30:04.376234] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:24.075 [2024-11-15 12:30:04.376252] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:24.334 #25 NEW cov: 11197 ft: 16950 corp: 8/29b lim: 4 exec/s: 25 rss: 75Mb L: 4/4 MS: 1 ChangeBinInt- 00:08:24.335 [2024-11-15 12:30:04.558045] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:24.335 [2024-11-15 12:30:04.558069] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:24.335 [2024-11-15 12:30:04.558087] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:24.335 #26 NEW cov: 11204 ft: 17270 corp: 9/33b lim: 4 exec/s: 26 rss: 75Mb L: 4/4 MS: 1 ChangeBinInt- 00:08:24.605 [2024-11-15 12:30:04.736242] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:24.605 [2024-11-15 12:30:04.736265] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:24.605 [2024-11-15 12:30:04.736282] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:24.605 #27 NEW cov: 11204 ft: 17618 corp: 10/37b lim: 4 exec/s: 27 rss: 75Mb L: 4/4 MS: 1 ChangeBit- 00:08:24.605 [2024-11-15 12:30:04.914691] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:24.605 [2024-11-15 12:30:04.914714] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:24.605 [2024-11-15 12:30:04.914733] vfio_user.c: 
144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:24.863 #30 NEW cov: 11204 ft: 17713 corp: 11/41b lim: 4 exec/s: 15 rss: 75Mb L: 4/4 MS: 3 CopyPart-ShuffleBytes-CrossOver- 00:08:24.863 #30 DONE cov: 11204 ft: 17713 corp: 11/41b lim: 4 exec/s: 15 rss: 75Mb 00:08:24.863 Done 30 runs in 2 second(s) 00:08:24.863 [2024-11-15 12:30:05.054537] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:08:25.122 12:30:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:08:25.122 12:30:05 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:25.122 12:30:05 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:25.122 12:30:05 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:08:25.122 12:30:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:08:25.122 12:30:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:25.122 12:30:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:25.122 12:30:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:25.122 12:30:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:08:25.122 12:30:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:08:25.122 12:30:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:08:25.122 12:30:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:08:25.122 12:30:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:25.122 12:30:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:25.122 12:30:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:25.122 12:30:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:08:25.122 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:25.122 12:30:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:25.122 12:30:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:25.122 12:30:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:08:25.122 [2024-11-15 12:30:05.342274] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:08:25.122 [2024-11-15 12:30:05.342350] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid679921 ] 00:08:25.122 [2024-11-15 12:30:05.439687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.381 [2024-11-15 12:30:05.485551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.381 INFO: Running with entropic power schedule (0xFF, 100). 00:08:25.381 INFO: Seed: 2513478634 00:08:25.381 INFO: Loaded 1 modules (384895 inline 8-bit counters): 384895 [0x2c0204c, 0x2c5ffcb), 00:08:25.381 INFO: Loaded 1 PC tables (384895 PCs): 384895 [0x2c5ffd0,0x323f7c0), 00:08:25.381 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:25.381 INFO: A corpus is not provided, starting from an empty corpus 00:08:25.381 #2 INITED exec/s: 0 rss: 68Mb 00:08:25.381 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:25.381 This may also happen if the target rejected all inputs we tried so far 00:08:25.640 [2024-11-15 12:30:05.735341] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:08:25.640 [2024-11-15 12:30:05.791433] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:25.899 NEW_FUNC[1/673]: 0x43c578 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:08:25.899 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:25.899 #5 NEW cov: 11144 ft: 11113 corp: 2/9b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 3 InsertRepeatedBytes-ChangeBinInt-CopyPart- 00:08:26.158 [2024-11-15 12:30:06.261335] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:26.158 #8 NEW cov: 11160 ft: 14525 corp: 3/17b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 3 EraseBytes-ChangeByte-InsertByte- 00:08:26.158 [2024-11-15 12:30:06.449802] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:26.417 NEW_FUNC[1/1]: 0x1c01538 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:26.417 #9 NEW cov: 11177 ft: 15729 corp: 4/25b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 CopyPart- 00:08:26.417 [2024-11-15 12:30:06.638792] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:26.417 #10 NEW cov: 11177 ft: 16120 corp: 5/33b lim: 8 exec/s: 10 rss: 75Mb L: 8/8 MS: 1 CrossOver- 00:08:26.676 [2024-11-15 12:30:06.805206] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:26.676 #11 NEW cov: 11177 ft: 16658 corp: 6/41b lim: 8 exec/s: 11 rss: 75Mb L: 8/8 MS: 1 CrossOver- 00:08:26.676 [2024-11-15 12:30:06.963262] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:26.935 #12 NEW cov: 11180 ft: 17005 corp: 7/49b lim: 8 exec/s: 12 rss: 75Mb L: 8/8 MS: 1 CopyPart- 00:08:26.935 [2024-11-15 12:30:07.121610] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:26.935 #13 NEW cov: 11180 ft: 17235 corp: 8/57b lim: 8 exec/s: 13 rss: 75Mb L: 8/8 MS: 1 CrossOver- 00:08:27.194 [2024-11-15 
12:30:07.280259] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:27.194 #14 NEW cov: 11180 ft: 17394 corp: 9/65b lim: 8 exec/s: 14 rss: 75Mb L: 8/8 MS: 1 ChangeBit- 00:08:27.194 [2024-11-15 12:30:07.440439] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:27.453 #16 NEW cov: 11187 ft: 17693 corp: 10/73b lim: 8 exec/s: 16 rss: 75Mb L: 8/8 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:08:27.453 [2024-11-15 12:30:07.610765] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:27.453 #17 NEW cov: 11187 ft: 17721 corp: 11/81b lim: 8 exec/s: 17 rss: 75Mb L: 8/8 MS: 1 ShuffleBytes- 00:08:27.453 [2024-11-15 12:30:07.770951] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:27.712 #18 NEW cov: 11187 ft: 17763 corp: 12/89b lim: 8 exec/s: 9 rss: 75Mb L: 8/8 MS: 1 CrossOver- 00:08:27.712 #18 DONE cov: 11187 ft: 17763 corp: 12/89b lim: 8 exec/s: 9 rss: 75Mb 00:08:27.712 Done 18 runs in 2 second(s) 00:08:27.712 [2024-11-15 12:30:07.886537] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:08:27.972 12:30:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:08:27.972 12:30:08 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:27.972 12:30:08 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:27.972 12:30:08 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:08:27.972 12:30:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:08:27.972 12:30:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:27.972 12:30:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:27.972 12:30:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:27.972 12:30:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:08:27.972 12:30:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:08:27.972 12:30:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:08:27.972 12:30:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:08:27.972 12:30:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:27.972 12:30:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:27.972 12:30:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:27.972 12:30:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:08:27.972 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:27.972 12:30:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:27.972 12:30:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:27.972 12:30:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 
-- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:08:27.972 [2024-11-15 12:30:08.154695] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:08:27.972 [2024-11-15 12:30:08.154766] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid680443 ] 00:08:27.972 [2024-11-15 12:30:08.249600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.972 [2024-11-15 12:30:08.296230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.231 INFO: Running with entropic power schedule (0xFF, 100). 00:08:28.231 INFO: Seed: 1036528593 00:08:28.231 INFO: Loaded 1 modules (384895 inline 8-bit counters): 384895 [0x2c0204c, 0x2c5ffcb), 00:08:28.231 INFO: Loaded 1 PC tables (384895 PCs): 384895 [0x2c5ffd0,0x323f7c0), 00:08:28.231 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:28.231 INFO: A corpus is not provided, starting from an empty corpus 00:08:28.231 #2 INITED exec/s: 0 rss: 68Mb 00:08:28.231 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:28.231 This may also happen if the target rejected all inputs we tried so far 00:08:28.231 [2024-11-15 12:30:08.552413] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:08:28.749 NEW_FUNC[1/673]: 0x43cc68 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:08:28.749 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:28.749 #217 NEW cov: 11157 ft: 11068 corp: 2/33b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 5 InsertByte-ChangeBinInt-InsertRepeatedBytes-ShuffleBytes-InsertRepeatedBytes- 00:08:29.007 #218 NEW cov: 11171 ft: 14061 corp: 3/65b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:08:29.266 NEW_FUNC[1/1]: 0x1c01538 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:29.266 #219 NEW cov: 11188 ft: 14508 corp: 4/97b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ChangeBit- 00:08:29.266 #220 NEW cov: 11188 ft: 15894 corp: 5/129b lim: 32 exec/s: 220 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:29.524 #226 NEW cov: 11188 ft: 16485 corp: 6/161b lim: 32 exec/s: 226 rss: 75Mb L: 32/32 MS: 1 CopyPart- 00:08:29.783 #227 NEW cov: 11188 ft: 16900 corp: 7/193b lim: 32 exec/s: 227 rss: 75Mb L: 32/32 MS: 1 ChangeBit- 00:08:29.783 #228 NEW cov: 11188 ft: 17026 corp: 8/225b lim: 32 exec/s: 228 rss: 75Mb L: 32/32 MS: 1 ChangeByte- 00:08:30.042 #229 NEW cov: 11188 ft: 17415 corp: 9/257b lim: 32 exec/s: 229 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:30.301 #235 NEW cov: 11195 ft: 17444 corp: 10/289b lim: 32 exec/s: 235 rss: 75Mb L: 32/32 MS: 1 ChangeByte- 00:08:30.301 #236 NEW cov: 11195 ft: 17456 corp: 11/321b lim: 32 exec/s: 118 rss: 76Mb L: 32/32 MS: 1 ChangeByte- 00:08:30.301 #236 DONE cov: 11195 ft: 17456 corp: 
11/321b lim: 32 exec/s: 118 rss: 76Mb 00:08:30.301 Done 236 runs in 2 second(s) 00:08:30.302 [2024-11-15 12:30:10.637539] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:08:30.560 12:30:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:08:30.560 12:30:10 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:30.560 12:30:10 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:30.561 12:30:10 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:30.561 12:30:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:08:30.561 12:30:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:30.561 12:30:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:30.561 12:30:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:30.561 12:30:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:08:30.561 12:30:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:08:30.561 12:30:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:08:30.561 12:30:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:08:30.561 12:30:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:30.561 12:30:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:30.561 12:30:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:30.561 12:30:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:08:30.561 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:30.561 12:30:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:30.561 12:30:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:30.561 12:30:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:08:30.819 [2024-11-15 12:30:10.928326] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:08:30.819 [2024-11-15 12:30:10.928409] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid680803 ] 00:08:30.819 [2024-11-15 12:30:11.021679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.819 [2024-11-15 12:30:11.067664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.078 INFO: Running with entropic power schedule (0xFF, 100). 00:08:31.078 INFO: Seed: 3802509494 00:08:31.078 INFO: Loaded 1 modules (384895 inline 8-bit counters): 384895 [0x2c0204c, 0x2c5ffcb), 00:08:31.078 INFO: Loaded 1 PC tables (384895 PCs): 384895 [0x2c5ffd0,0x323f7c0), 00:08:31.078 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:31.078 INFO: A corpus is not provided, starting from an empty corpus 00:08:31.078 #2 INITED exec/s: 0 rss: 68Mb 00:08:31.078 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:31.078 This may also happen if the target rejected all inputs we tried so far 00:08:31.078 [2024-11-15 12:30:11.319066] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:08:31.595 NEW_FUNC[1/673]: 0x43d4e8 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:08:31.595 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:31.595 #36 NEW cov: 11158 ft: 11112 corp: 2/33b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 4 CMP-InsertByte-EraseBytes-InsertRepeatedBytes- DE: "\001\000\000\000\000\000\000\000"- 00:08:31.595 [2024-11-15 12:30:11.858180] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 2814749777238682 > max 8796093022208 00:08:31.595 [2024-11-15 12:30:11.858219] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x9a9a9a9a9a9a9a9a, 0x9aa49a9a9b353534) offset=0x290a000000000000 flags=0x3: No space left on device 00:08:31.595 [2024-11-15 12:30:11.858232] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device 00:08:31.595 [2024-11-15 12:30:11.858266] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:31.595 [2024-11-15 12:30:11.859192] vfio_user.c:3108:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x9a9a9a9a9a9a9a9a, 0x9aa49a9a9b353534) flags=0: No such file or directory 00:08:31.595 [2024-11-15 12:30:11.859212] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:08:31.595 [2024-11-15 12:30:11.859228] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:08:31.851 NEW_FUNC[1/1]: 0x1581498 in vfio_user_log /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:3098 00:08:31.851 #40 NEW cov: 11186 ft: 14256 corp: 3/65b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 4 InsertByte-InsertRepeatedBytes-ChangeBinInt-InsertRepeatedBytes- 00:08:31.851 [2024-11-15 12:30:12.061139] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 2814749777238682 > max 8796093022208 00:08:31.851 [2024-11-15 12:30:12.061165] vfio_user.c:3110:vfio_user_log: *ERROR*: 
/tmp/vfio-user-4/domain/1: failed to add DMA region [0x9a9a9a9a9a9a9a9a, 0x9aa49a9a9b353534) offset=0x290a000000000000 flags=0x3: No space left on device 00:08:31.851 [2024-11-15 12:30:12.061176] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device 00:08:31.851 [2024-11-15 12:30:12.061209] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:31.851 [2024-11-15 12:30:12.062114] vfio_user.c:3108:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x9a9a9a9a9a9a9a9a, 0x9aa49a9a9b353534) flags=0: No such file or directory 00:08:31.851 [2024-11-15 12:30:12.062134] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:08:31.851 [2024-11-15 12:30:12.062150] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:08:31.851 NEW_FUNC[1/1]: 0x1c01538 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:31.851 #41 NEW cov: 11203 ft: 15474 corp: 4/97b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:32.110 [2024-11-15 12:30:12.249941] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 2814749777238682 > max 8796093022208 00:08:32.110 [2024-11-15 12:30:12.249966] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x9a9a9a9a9a9a9a9a, 0x9aa49a9a9b353534) offset=0x290a000000000a00 flags=0x3: No space left on device 00:08:32.110 [2024-11-15 12:30:12.249978] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device 00:08:32.110 [2024-11-15 12:30:12.249995] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:32.110 [2024-11-15 12:30:12.250943] vfio_user.c:3108:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x9a9a9a9a9a9a9a9a, 0x9aa49a9a9b353534) flags=0: No such file or directory 00:08:32.110 [2024-11-15 12:30:12.250963] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:08:32.110 [2024-11-15 12:30:12.250980] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:08:32.110 #42 NEW cov: 11203 ft: 15882 corp: 5/129b lim: 32 exec/s: 42 rss: 75Mb L: 32/32 MS: 1 CrossOver- 00:08:32.369 #43 NEW cov: 11203 ft: 17134 corp: 6/161b lim: 32 exec/s: 43 rss: 75Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:32.369 [2024-11-15 12:30:12.618445] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0x9a9a9a9a9a9a, 0x9aa49a9aa49a) fd=302 offset=0x290a000000000a00 prot=0x3: Permission denied 00:08:32.369 [2024-11-15 12:30:12.618469] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x9a9a9a9a9a9a, 0x9aa49a9aa49a) offset=0x290a000000000a00 flags=0x3: Permission denied 00:08:32.369 [2024-11-15 12:30:12.618480] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Permission denied 00:08:32.369 [2024-11-15 12:30:12.618497] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:32.369 [2024-11-15 12:30:12.619470] vfio_user.c:3108:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x9a9a9a9a9a9a, 0x9aa49a9aa49a) flags=0: No such file or directory 00:08:32.369 [2024-11-15 12:30:12.619495] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 
failed: No such file or directory 00:08:32.369 [2024-11-15 12:30:12.619513] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:08:32.627 #44 NEW cov: 11203 ft: 17407 corp: 7/193b lim: 32 exec/s: 44 rss: 75Mb L: 32/32 MS: 1 CopyPart- 00:08:32.627 [2024-11-15 12:30:12.799803] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0x9a9a9a9a9a9a, 0x9a9a9a9aa49a) fd=302 offset=0x290a000000000a00 prot=0x3: Permission denied 00:08:32.627 [2024-11-15 12:30:12.799826] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x9a9a9a9a9a9a, 0x9a9a9a9aa49a) offset=0x290a000000000a00 flags=0x3: Permission denied 00:08:32.627 [2024-11-15 12:30:12.799837] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Permission denied 00:08:32.627 [2024-11-15 12:30:12.799854] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:32.627 [2024-11-15 12:30:12.800811] vfio_user.c:3108:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x9a9a9a9a9a9a, 0x9a9a9a9aa49a) flags=0: No such file or directory 00:08:32.627 [2024-11-15 12:30:12.800831] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:08:32.627 [2024-11-15 12:30:12.800848] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:08:32.627 #45 NEW cov: 11203 ft: 17540 corp: 8/225b lim: 32 exec/s: 45 rss: 75Mb L: 32/32 MS: 1 CrossOver- 00:08:32.886 [2024-11-15 12:30:12.977475] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=302 offset=0 prot=0x3: Invalid argument 00:08:32.886 [2024-11-15 12:30:12.977498] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0 flags=0x3: Invalid argument 00:08:32.886 [2024-11-15 12:30:12.977509] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:08:32.886 [2024-11-15 12:30:12.977525] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:32.886 [2024-11-15 12:30:12.978474] vfio_user.c:3108:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:08:32.886 [2024-11-15 12:30:12.978494] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:08:32.886 [2024-11-15 12:30:12.978511] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:08:32.886 #53 NEW cov: 11210 ft: 17647 corp: 9/257b lim: 32 exec/s: 53 rss: 75Mb L: 32/32 MS: 3 CopyPart-ShuffleBytes-InsertRepeatedBytes- 00:08:33.145 #54 NEW cov: 11210 ft: 17694 corp: 10/289b lim: 32 exec/s: 54 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:33.145 [2024-11-15 12:30:13.332638] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 2814749777238682 > max 8796093022208 00:08:33.145 [2024-11-15 12:30:13.332662] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x9a9a9a9a9a9a9a9a, 0x9aa49a9a9b353534) offset=0x290a008000000a00 flags=0x3: No space left on device 00:08:33.145 [2024-11-15 12:30:13.332674] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device 00:08:33.145 [2024-11-15 12:30:13.332691] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 
00:08:33.145 [2024-11-15 12:30:13.333662] vfio_user.c:3108:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x9a9a9a9a9a9a9a9a, 0x9aa49a9a9b353534) flags=0: No such file or directory 00:08:33.145 [2024-11-15 12:30:13.333681] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:08:33.145 [2024-11-15 12:30:13.333699] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:08:33.145 #55 NEW cov: 11210 ft: 17928 corp: 11/321b lim: 32 exec/s: 27 rss: 76Mb L: 32/32 MS: 1 ChangeBit- 00:08:33.145 #55 DONE cov: 11210 ft: 17928 corp: 11/321b lim: 32 exec/s: 27 rss: 76Mb 00:08:33.145 ###### Recommended dictionary. ###### 00:08:33.145 "\001\000\000\000\000\000\000\000" # Uses: 0 00:08:33.145 ###### End of recommended dictionary. ###### 00:08:33.145 Done 55 runs in 2 second(s) 00:08:33.145 [2024-11-15 12:30:13.465529] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:08:33.404 12:30:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:08:33.404 12:30:13 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:33.404 12:30:13 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:33.404 12:30:13 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:08:33.404 12:30:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:08:33.404 12:30:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:33.404 12:30:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:33.404 12:30:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:33.404 12:30:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:08:33.404 12:30:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:08:33.404 12:30:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:08:33.404 12:30:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:08:33.404 12:30:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:33.404 12:30:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:33.404 12:30:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:33.404 12:30:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:08:33.404 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:33.404 12:30:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:33.404 12:30:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:33.404 12:30:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 
/tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:08:33.664 [2024-11-15 12:30:13.755182] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:08:33.664 [2024-11-15 12:30:13.755251] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid681167 ] 00:08:33.664 [2024-11-15 12:30:13.850292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.664 [2024-11-15 12:30:13.896156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.922 INFO: Running with entropic power schedule (0xFF, 100). 00:08:33.922 INFO: Seed: 2334540727 00:08:33.922 INFO: Loaded 1 modules (384895 inline 8-bit counters): 384895 [0x2c0204c, 0x2c5ffcb), 00:08:33.922 INFO: Loaded 1 PC tables (384895 PCs): 384895 [0x2c5ffd0,0x323f7c0), 00:08:33.922 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:33.922 INFO: A corpus is not provided, starting from an empty corpus 00:08:33.922 #2 INITED exec/s: 0 rss: 67Mb 00:08:33.922 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:33.922 This may also happen if the target rejected all inputs we tried so far 00:08:33.922 [2024-11-15 12:30:14.146260] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:08:33.922 [2024-11-15 12:30:14.194352] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:33.922 [2024-11-15 12:30:14.194393] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:34.441 NEW_FUNC[1/674]: 0x43dee8 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:08:34.441 NEW_FUNC[2/674]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:34.441 #12 NEW cov: 11165 ft: 11117 corp: 2/14b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 5 ChangeByte-ChangeBit-ChangeBit-ChangeBit-InsertRepeatedBytes- 00:08:34.441 [2024-11-15 12:30:14.682341] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:34.441 [2024-11-15 12:30:14.682389] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:34.699 #23 NEW cov: 11182 ft: 14359 corp: 3/27b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ChangeByte- 00:08:34.699 [2024-11-15 12:30:14.864194] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:34.699 [2024-11-15 12:30:14.864226] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:34.699 NEW_FUNC[1/1]: 0x1c01538 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:34.699 #24 NEW cov: 11199 ft: 15498 corp: 4/40b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:34.699 [2024-11-15 12:30:15.041914] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:34.699 [2024-11-15 12:30:15.041944] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:34.958 #30 NEW 
cov: 11199 ft: 15585 corp: 5/53b lim: 13 exec/s: 30 rss: 75Mb L: 13/13 MS: 1 CrossOver- 00:08:34.958 [2024-11-15 12:30:15.220078] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:34.958 [2024-11-15 12:30:15.220108] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:35.216 #36 NEW cov: 11199 ft: 16092 corp: 6/66b lim: 13 exec/s: 36 rss: 75Mb L: 13/13 MS: 1 ShuffleBytes- 00:08:35.216 [2024-11-15 12:30:15.399188] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:35.216 [2024-11-15 12:30:15.399218] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:35.216 #42 NEW cov: 11199 ft: 16183 corp: 7/79b lim: 13 exec/s: 42 rss: 75Mb L: 13/13 MS: 1 ChangeByte- 00:08:35.475 [2024-11-15 12:30:15.579783] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:35.475 [2024-11-15 12:30:15.579812] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:35.475 #43 NEW cov: 11199 ft: 16245 corp: 8/92b lim: 13 exec/s: 43 rss: 75Mb L: 13/13 MS: 1 ChangeBit- 00:08:35.475 [2024-11-15 12:30:15.762779] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:35.475 [2024-11-15 12:30:15.762809] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:35.733 #44 NEW cov: 11199 ft: 16699 corp: 9/105b lim: 13 exec/s: 44 rss: 75Mb L: 13/13 MS: 1 ShuffleBytes- 00:08:35.733 [2024-11-15 12:30:15.939650] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:35.733 [2024-11-15 12:30:15.939680] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:35.733 #45 NEW cov: 11206 ft: 16751 corp: 10/118b lim: 13 exec/s: 45 rss: 76Mb L: 13/13 MS: 1 InsertRepeatedBytes- 00:08:35.993 [2024-11-15 12:30:16.133446] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:35.993 [2024-11-15 12:30:16.133477] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:35.993 #56 NEW cov: 11206 ft: 16839 corp: 11/131b lim: 13 exec/s: 28 rss: 76Mb L: 13/13 MS: 1 ChangeASCIIInt- 00:08:35.993 #56 DONE cov: 11206 ft: 16839 corp: 11/131b lim: 13 exec/s: 28 rss: 76Mb 00:08:35.993 Done 56 runs in 2 second(s) 00:08:35.993 [2024-11-15 12:30:16.259538] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:08:36.252 12:30:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:08:36.252 12:30:16 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:36.252 12:30:16 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:36.252 12:30:16 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:08:36.252 12:30:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:08:36.252 12:30:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:36.252 12:30:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:36.252 12:30:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:36.252 12:30:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:08:36.252 12:30:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local 
vfiouser_dir=/tmp/vfio-user-6/domain/1 00:08:36.252 12:30:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:08:36.252 12:30:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:08:36.252 12:30:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:36.252 12:30:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:36.252 12:30:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:36.252 12:30:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:08:36.252 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:36.252 12:30:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:36.252 12:30:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:36.252 12:30:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:08:36.252 [2024-11-15 12:30:16.549925] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:08:36.252 [2024-11-15 12:30:16.549995] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid681521 ] 00:08:36.512 [2024-11-15 12:30:16.645629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.512 [2024-11-15 12:30:16.692681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.771 INFO: Running with entropic power schedule (0xFF, 100). 00:08:36.771 INFO: Seed: 839570369 00:08:36.771 INFO: Loaded 1 modules (384895 inline 8-bit counters): 384895 [0x2c0204c, 0x2c5ffcb), 00:08:36.771 INFO: Loaded 1 PC tables (384895 PCs): 384895 [0x2c5ffd0,0x323f7c0), 00:08:36.771 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:36.771 INFO: A corpus is not provided, starting from an empty corpus 00:08:36.771 #2 INITED exec/s: 0 rss: 67Mb 00:08:36.771 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:36.771 This may also happen if the target rejected all inputs we tried so far 00:08:36.771 [2024-11-15 12:30:16.946130] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:08:36.771 [2024-11-15 12:30:16.998381] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:36.771 [2024-11-15 12:30:16.998414] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.289 NEW_FUNC[1/674]: 0x43ebd8 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:08:37.289 NEW_FUNC[2/674]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:37.289 #3 NEW cov: 11160 ft: 11111 corp: 2/10b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:08:37.289 [2024-11-15 12:30:17.481219] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:37.289 [2024-11-15 12:30:17.481263] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.289 #4 NEW cov: 11174 ft: 14463 corp: 3/19b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 1 ChangeByte- 00:08:37.549 [2024-11-15 12:30:17.668725] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:37.549 [2024-11-15 12:30:17.668760] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.549 NEW_FUNC[1/1]: 0x1c01538 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:37.549 #5 NEW cov: 11191 ft: 16217 corp: 4/28b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 1 CopyPart- 00:08:37.549 [2024-11-15 12:30:17.860555] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:37.549 [2024-11-15 12:30:17.860585] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.809 #11 NEW cov: 11191 ft: 16725 corp: 5/37b lim: 9 exec/s: 11 rss: 76Mb L: 9/9 MS: 1 CopyPart- 00:08:37.809 [2024-11-15 12:30:18.034305] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:37.809 [2024-11-15 12:30:18.034344] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.809 #17 NEW cov: 11191 ft: 17020 corp: 6/46b lim: 9 exec/s: 17 rss: 76Mb L: 9/9 MS: 1 CopyPart- 00:08:38.068 [2024-11-15 12:30:18.207252] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:38.068 [2024-11-15 12:30:18.207282] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:38.068 #18 NEW cov: 11191 ft: 17441 corp: 7/55b lim: 9 exec/s: 18 rss: 76Mb L: 9/9 MS: 1 ShuffleBytes- 00:08:38.068 [2024-11-15 12:30:18.380037] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:38.068 [2024-11-15 12:30:18.380067] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:38.327 #19 NEW cov: 11191 ft: 17676 corp: 8/64b lim: 9 exec/s: 19 rss: 76Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:38.327 [2024-11-15 12:30:18.554438] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:38.327 [2024-11-15 12:30:18.554469] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:38.327 #20 NEW cov: 11191 ft: 18047 corp: 9/73b lim: 9 exec/s: 20 rss: 76Mb L: 9/9 
MS: 1 CopyPart- 00:08:38.586 [2024-11-15 12:30:18.742342] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:38.586 [2024-11-15 12:30:18.742375] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:38.586 #21 NEW cov: 11198 ft: 18465 corp: 10/82b lim: 9 exec/s: 21 rss: 76Mb L: 9/9 MS: 1 ChangeByte- 00:08:38.586 [2024-11-15 12:30:18.926231] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:38.586 [2024-11-15 12:30:18.926262] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:38.845 #22 NEW cov: 11198 ft: 18621 corp: 11/91b lim: 9 exec/s: 11 rss: 76Mb L: 9/9 MS: 1 ShuffleBytes- 00:08:38.845 #22 DONE cov: 11198 ft: 18621 corp: 11/91b lim: 9 exec/s: 11 rss: 76Mb 00:08:38.845 Done 22 runs in 2 second(s) 00:08:38.845 [2024-11-15 12:30:19.049522] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:08:39.105 12:30:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:08:39.105 12:30:19 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:39.105 12:30:19 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:39.105 12:30:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:08:39.105 00:08:39.105 real 0m20.025s 00:08:39.105 user 0m27.688s 00:08:39.105 sys 0m2.083s 00:08:39.105 12:30:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.105 12:30:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:39.105 ************************************ 00:08:39.105 END TEST vfio_llvm_fuzz 00:08:39.105 ************************************ 00:08:39.105 00:08:39.105 real 1m26.111s 00:08:39.105 user 2m7.943s 00:08:39.105 sys 0m11.585s 00:08:39.105 12:30:19 llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.105 12:30:19 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:39.105 ************************************ 00:08:39.105 END TEST llvm_fuzz 00:08:39.105 ************************************ 00:08:39.105 12:30:19 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:08:39.105 12:30:19 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:08:39.105 12:30:19 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:08:39.105 12:30:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:39.105 12:30:19 -- common/autotest_common.sh@10 -- # set +x 00:08:39.105 12:30:19 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:08:39.105 12:30:19 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:08:39.105 12:30:19 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:08:39.105 12:30:19 -- common/autotest_common.sh@10 -- # set +x 00:08:44.378 INFO: APP EXITING 00:08:44.378 INFO: killing all VMs 00:08:44.378 INFO: killing vhost app 00:08:44.378 INFO: EXIT DONE 00:08:46.915 Waiting for block devices as requested 00:08:46.915 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme 00:08:47.174 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:47.175 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:47.175 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:47.434 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:47.434 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:47.434 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:47.694 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:47.694 0000:00:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:08:47.694 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:47.694 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:47.953 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:47.953 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:48.213 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:48.213 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:48.213 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:48.473 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:55.042 Cleaning 00:08:55.042 Removing: /dev/shm/spdk_tgt_trace.pid659154 00:08:55.042 Removing: /var/run/dpdk/spdk_pid656760 00:08:55.042 Removing: /var/run/dpdk/spdk_pid657894 00:08:55.042 Removing: /var/run/dpdk/spdk_pid659154 00:08:55.042 Removing: /var/run/dpdk/spdk_pid659619 00:08:55.042 Removing: /var/run/dpdk/spdk_pid660349 00:08:55.042 Removing: /var/run/dpdk/spdk_pid660530 00:08:55.042 Removing: /var/run/dpdk/spdk_pid661279 00:08:55.042 Removing: /var/run/dpdk/spdk_pid661343 00:08:55.042 Removing: /var/run/dpdk/spdk_pid661783 00:08:55.042 Removing: /var/run/dpdk/spdk_pid662035 00:08:55.042 Removing: /var/run/dpdk/spdk_pid662271 00:08:55.042 Removing: /var/run/dpdk/spdk_pid662519 00:08:55.042 Removing: /var/run/dpdk/spdk_pid662757 00:08:55.042 Removing: /var/run/dpdk/spdk_pid662935 00:08:55.042 Removing: /var/run/dpdk/spdk_pid663084 00:08:55.042 Removing: /var/run/dpdk/spdk_pid663366 00:08:55.042 Removing: /var/run/dpdk/spdk_pid663970 00:08:55.042 Removing: /var/run/dpdk/spdk_pid666318 00:08:55.042 Removing: /var/run/dpdk/spdk_pid666526 00:08:55.042 Removing: /var/run/dpdk/spdk_pid666726 00:08:55.042 Removing: /var/run/dpdk/spdk_pid666843 00:08:55.042 Removing: /var/run/dpdk/spdk_pid667286 00:08:55.042 Removing: /var/run/dpdk/spdk_pid667291 00:08:55.042 Removing: /var/run/dpdk/spdk_pid667682 00:08:55.042 Removing: /var/run/dpdk/spdk_pid667821 00:08:55.042 Removing: /var/run/dpdk/spdk_pid668056 00:08:55.042 Removing: /var/run/dpdk/spdk_pid668074 00:08:55.042 Removing: /var/run/dpdk/spdk_pid668280 00:08:55.042 Removing: /var/run/dpdk/spdk_pid668285 00:08:55.042 Removing: /var/run/dpdk/spdk_pid668738 00:08:55.042 Removing: /var/run/dpdk/spdk_pid668937 00:08:55.042 Removing: /var/run/dpdk/spdk_pid669132 00:08:55.042 Removing: /var/run/dpdk/spdk_pid669374 00:08:55.042 Removing: /var/run/dpdk/spdk_pid669947 00:08:55.042 Removing: /var/run/dpdk/spdk_pid670304 00:08:55.042 Removing: /var/run/dpdk/spdk_pid670665 00:08:55.042 Removing: /var/run/dpdk/spdk_pid671019 00:08:55.042 Removing: /var/run/dpdk/spdk_pid671368 00:08:55.042 Removing: /var/run/dpdk/spdk_pid671721 00:08:55.042 Removing: /var/run/dpdk/spdk_pid672074 00:08:55.042 Removing: /var/run/dpdk/spdk_pid672407 00:08:55.042 Removing: /var/run/dpdk/spdk_pid672768 00:08:55.042 Removing: /var/run/dpdk/spdk_pid673110 00:08:55.043 Removing: /var/run/dpdk/spdk_pid673491 00:08:55.043 Removing: /var/run/dpdk/spdk_pid673866 00:08:55.043 Removing: /var/run/dpdk/spdk_pid674203 00:08:55.043 Removing: /var/run/dpdk/spdk_pid674469 00:08:55.043 Removing: /var/run/dpdk/spdk_pid674813 00:08:55.043 Removing: /var/run/dpdk/spdk_pid675175 00:08:55.043 Removing: /var/run/dpdk/spdk_pid675529 00:08:55.043 Removing: /var/run/dpdk/spdk_pid675888 00:08:55.043 Removing: /var/run/dpdk/spdk_pid676242 00:08:55.043 Removing: /var/run/dpdk/spdk_pid676606 00:08:55.043 Removing: /var/run/dpdk/spdk_pid676962 00:08:55.043 Removing: /var/run/dpdk/spdk_pid677280 00:08:55.043 Removing: /var/run/dpdk/spdk_pid677566 00:08:55.043 Removing: /var/run/dpdk/spdk_pid677879 00:08:55.043 
Removing: /var/run/dpdk/spdk_pid678230 00:08:55.043 Removing: /var/run/dpdk/spdk_pid678730 00:08:55.043 Removing: /var/run/dpdk/spdk_pid679217 00:08:55.043 Removing: /var/run/dpdk/spdk_pid679921 00:08:55.043 Removing: /var/run/dpdk/spdk_pid680443 00:08:55.043 Removing: /var/run/dpdk/spdk_pid680803 00:08:55.043 Removing: /var/run/dpdk/spdk_pid681167 00:08:55.043 Removing: /var/run/dpdk/spdk_pid681521 00:08:55.043 Clean 00:08:55.043 12:30:34 -- common/autotest_common.sh@1453 -- # return 0 00:08:55.043 12:30:34 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:08:55.043 12:30:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:55.043 12:30:34 -- common/autotest_common.sh@10 -- # set +x 00:08:55.043 12:30:34 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:08:55.043 12:30:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:55.043 12:30:34 -- common/autotest_common.sh@10 -- # set +x 00:08:55.043 12:30:34 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:55.043 12:30:34 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:08:55.043 12:30:34 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:08:55.043 12:30:34 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:08:55.043 12:30:34 -- spdk/autotest.sh@398 -- # hostname 00:08:55.043 12:30:34 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-39 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info 00:08:55.043 geninfo: WARNING: invalid characters removed from testname! 
00:08:58.330 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcda 00:09:03.605 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcda 00:09:06.897 12:30:47 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:09:15.022 12:30:54 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:09:20.299 12:31:00 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:09:25.579 12:31:05 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:09:30.851 12:31:10 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:09:36.130 12:31:16 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:09:41.415 12:31:21 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:09:41.415 12:31:21 -- spdk/autorun.sh@1 -- $ timing_finish 00:09:41.415 12:31:21 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt ]] 00:09:41.415 12:31:21 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:09:41.415 12:31:21 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:09:41.415 12:31:21 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:09:41.674 + [[ -n 545568 ]] 00:09:41.674 + sudo kill 545568 00:09:41.685 [Pipeline] } 00:09:41.702 [Pipeline] // stage 00:09:41.709 [Pipeline] } 00:09:41.723 [Pipeline] // timeout 00:09:41.728 [Pipeline] } 00:09:41.741 [Pipeline] // catchError 00:09:41.746 [Pipeline] } 00:09:41.760 [Pipeline] // wrap 00:09:41.766 [Pipeline] } 00:09:41.779 [Pipeline] // catchError 00:09:41.787 [Pipeline] stage 00:09:41.789 [Pipeline] { (Epilogue) 00:09:41.801 [Pipeline] catchError 00:09:41.803 [Pipeline] { 00:09:41.814 [Pipeline] echo 00:09:41.816 Cleanup processes 00:09:41.821 [Pipeline] sh 00:09:42.109 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:42.109 688671 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:42.123 [Pipeline] sh 00:09:42.410 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:42.410 ++ grep -v 'sudo pgrep' 00:09:42.410 ++ awk '{print $1}' 00:09:42.410 + sudo kill -9 00:09:42.410 + true 00:09:42.423 [Pipeline] sh 00:09:42.710 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:09:54.940 [Pipeline] sh 00:09:55.395 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:09:55.396 Artifacts sizes are good 00:09:55.428 [Pipeline] archiveArtifacts 00:09:55.435 Archiving artifacts 00:09:55.567 [Pipeline] sh 00:09:55.852 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest 00:09:55.868 [Pipeline] cleanWs 00:09:55.878 [WS-CLEANUP] Deleting project workspace... 00:09:55.878 [WS-CLEANUP] Deferred wipeout is used... 00:09:55.885 [WS-CLEANUP] done 00:09:55.887 [Pipeline] } 00:09:55.904 [Pipeline] // catchError 00:09:55.917 [Pipeline] sh 00:09:56.201 + logger -p user.info -t JENKINS-CI 00:09:56.210 [Pipeline] } 00:09:56.223 [Pipeline] // stage 00:09:56.227 [Pipeline] } 00:09:56.243 [Pipeline] // node 00:09:56.250 [Pipeline] End of Pipeline 00:09:56.291 Finished: SUCCESS