00:00:00.000 Started by upstream project "autotest-per-patch" build number 131255 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.021 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.022 The recommended git tool is: git 00:00:00.023 using credential 00000000-0000-0000-0000-000000000002 00:00:00.025 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.042 Fetching changes from the remote Git repository 00:00:00.043 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.067 Using shallow fetch with depth 1 00:00:00.067 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.067 > git --version # timeout=10 00:00:00.113 > git --version # 'git version 2.39.2' 00:00:00.113 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.187 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.187 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.081 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.095 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.109 Checking out Revision 58e4f482292076ec19d68e6712473e60ef956aed (FETCH_HEAD) 00:00:03.109 > git config core.sparsecheckout # timeout=10 00:00:03.122 > git read-tree -mu HEAD # timeout=10 00:00:03.140 > git checkout -f 58e4f482292076ec19d68e6712473e60ef956aed # timeout=5 00:00:03.160 Commit message: "packer: Fix typo in a package name" 00:00:03.160 > git rev-list --no-walk 58e4f482292076ec19d68e6712473e60ef956aed # timeout=10 00:00:03.274 [Pipeline] Start of Pipeline 00:00:03.289 [Pipeline] library 00:00:03.291 Loading library shm_lib@master 00:00:03.291 Library shm_lib@master is cached. Copying from home. 00:00:03.311 [Pipeline] node 00:00:03.337 Running on WFP20 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:03.339 [Pipeline] { 00:00:03.350 [Pipeline] catchError 00:00:03.352 [Pipeline] { 00:00:03.367 [Pipeline] wrap 00:00:03.377 [Pipeline] { 00:00:03.387 [Pipeline] stage 00:00:03.390 [Pipeline] { (Prologue) 00:00:03.604 [Pipeline] sh 00:00:03.894 + logger -p user.info -t JENKINS-CI 00:00:03.917 [Pipeline] echo 00:00:03.919 Node: WFP20 00:00:03.925 [Pipeline] sh 00:00:04.223 [Pipeline] setCustomBuildProperty 00:00:04.236 [Pipeline] echo 00:00:04.238 Cleanup processes 00:00:04.244 [Pipeline] sh 00:00:04.530 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.530 3713281 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.542 [Pipeline] sh 00:00:04.824 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.824 ++ grep -v 'sudo pgrep' 00:00:04.824 ++ awk '{print $1}' 00:00:04.824 + sudo kill -9 00:00:04.824 + true 00:00:04.839 [Pipeline] cleanWs 00:00:04.850 [WS-CLEANUP] Deleting project workspace... 00:00:04.850 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.857 [WS-CLEANUP] done 00:00:04.861 [Pipeline] setCustomBuildProperty 00:00:04.874 [Pipeline] sh 00:00:05.155 + sudo git config --global --replace-all safe.directory '*' 00:00:05.245 [Pipeline] httpRequest 00:00:05.643 [Pipeline] echo 00:00:05.645 Sorcerer 10.211.164.101 is alive 00:00:05.653 [Pipeline] retry 00:00:05.655 [Pipeline] { 00:00:05.666 [Pipeline] httpRequest 00:00:05.670 HttpMethod: GET 00:00:05.670 URL: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:05.670 Sending request to url: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:05.672 Response Code: HTTP/1.1 200 OK 00:00:05.673 Success: Status code 200 is in the accepted range: 200,404 00:00:05.673 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:05.819 [Pipeline] } 00:00:05.840 [Pipeline] // retry 00:00:05.848 [Pipeline] sh 00:00:06.131 + tar --no-same-owner -xf jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:06.147 [Pipeline] httpRequest 00:00:06.529 [Pipeline] echo 00:00:06.531 Sorcerer 10.211.164.101 is alive 00:00:06.540 [Pipeline] retry 00:00:06.542 [Pipeline] { 00:00:06.556 [Pipeline] httpRequest 00:00:06.560 HttpMethod: GET 00:00:06.560 URL: http://10.211.164.101/packages/spdk_cca20a51aa62a7266332056f27116925eb8713a3.tar.gz 00:00:06.561 Sending request to url: http://10.211.164.101/packages/spdk_cca20a51aa62a7266332056f27116925eb8713a3.tar.gz 00:00:06.562 Response Code: HTTP/1.1 200 OK 00:00:06.562 Success: Status code 200 is in the accepted range: 200,404 00:00:06.563 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_cca20a51aa62a7266332056f27116925eb8713a3.tar.gz 00:00:20.243 [Pipeline] } 00:00:20.260 [Pipeline] // retry 00:00:20.268 [Pipeline] sh 00:00:20.551 + tar --no-same-owner -xf spdk_cca20a51aa62a7266332056f27116925eb8713a3.tar.gz 00:00:23.106 [Pipeline] sh 00:00:23.390 + git -C spdk log --oneline -n5 00:00:23.391 cca20a51a pkgdep/git: Add patches to ICE driver for changes in >= 6.11 kernels 00:00:23.391 64ea4b87c pkgdep/git: Add small patch to irdma for >= 6.11 kernels 00:00:23.391 5a8c76d99 lib/nvmf: Add spdk_nvmf_send_discovery_log_notice API 00:00:23.391 a70c3a90b bdev/lvol: add allocated clusters num in bdev_lvol_get_lvols 00:00:23.391 c26697bf5 bdev_ut: Comparison operator and tests fixes 00:00:23.402 [Pipeline] } 00:00:23.420 [Pipeline] // stage 00:00:23.432 [Pipeline] stage 00:00:23.435 [Pipeline] { (Prepare) 00:00:23.455 [Pipeline] writeFile 00:00:23.471 [Pipeline] sh 00:00:23.757 + logger -p user.info -t JENKINS-CI 00:00:23.770 [Pipeline] sh 00:00:24.057 + logger -p user.info -t JENKINS-CI 00:00:24.071 [Pipeline] sh 00:00:24.357 + cat autorun-spdk.conf 00:00:24.357 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:24.357 SPDK_TEST_FUZZER_SHORT=1 00:00:24.357 SPDK_TEST_FUZZER=1 00:00:24.357 SPDK_TEST_SETUP=1 00:00:24.357 SPDK_RUN_UBSAN=1 00:00:24.364 RUN_NIGHTLY=0 00:00:24.371 [Pipeline] readFile 00:00:24.401 [Pipeline] withEnv 00:00:24.403 [Pipeline] { 00:00:24.414 [Pipeline] sh 00:00:24.698 + set -ex 00:00:24.698 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:00:24.698 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:24.698 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:24.698 ++ SPDK_TEST_FUZZER_SHORT=1 00:00:24.698 ++ SPDK_TEST_FUZZER=1 00:00:24.698 ++ SPDK_TEST_SETUP=1 00:00:24.698 ++ SPDK_RUN_UBSAN=1 00:00:24.698 ++ RUN_NIGHTLY=0 00:00:24.698 + case 
$SPDK_TEST_NVMF_NICS in 00:00:24.698 + DRIVERS= 00:00:24.698 + [[ -n '' ]] 00:00:24.698 + exit 0 00:00:24.709 [Pipeline] } 00:00:24.726 [Pipeline] // withEnv 00:00:24.731 [Pipeline] } 00:00:24.746 [Pipeline] // stage 00:00:24.756 [Pipeline] catchError 00:00:24.758 [Pipeline] { 00:00:24.772 [Pipeline] timeout 00:00:24.772 Timeout set to expire in 30 min 00:00:24.775 [Pipeline] { 00:00:24.789 [Pipeline] stage 00:00:24.791 [Pipeline] { (Tests) 00:00:24.806 [Pipeline] sh 00:00:25.091 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:25.091 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:25.091 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:00:25.091 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:00:25.091 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:25.091 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:00:25.091 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:00:25.091 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:00:25.091 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:00:25.091 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:00:25.091 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:00:25.091 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:25.091 + source /etc/os-release 00:00:25.091 ++ NAME='Fedora Linux' 00:00:25.091 ++ VERSION='39 (Cloud Edition)' 00:00:25.091 ++ ID=fedora 00:00:25.091 ++ VERSION_ID=39 00:00:25.091 ++ VERSION_CODENAME= 00:00:25.091 ++ PLATFORM_ID=platform:f39 00:00:25.091 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:00:25.091 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:25.091 ++ LOGO=fedora-logo-icon 00:00:25.091 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:00:25.091 ++ HOME_URL=https://fedoraproject.org/ 00:00:25.091 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:00:25.091 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:25.091 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:25.091 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:25.091 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:00:25.091 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:25.091 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:00:25.091 ++ SUPPORT_END=2024-11-12 00:00:25.091 ++ VARIANT='Cloud Edition' 00:00:25.091 ++ VARIANT_ID=cloud 00:00:25.091 + uname -a 00:00:25.091 Linux spdk-wfp-20 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:00:25.091 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:00:28.389 Hugepages 00:00:28.389 node hugesize free / total 00:00:28.389 node0 1048576kB 0 / 0 00:00:28.389 node0 2048kB 0 / 0 00:00:28.389 node1 1048576kB 0 / 0 00:00:28.389 node1 2048kB 0 / 0 00:00:28.389 00:00:28.389 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:28.389 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:28.389 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:28.389 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:28.389 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:28.389 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:28.389 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:28.389 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:28.389 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:28.389 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:28.389 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:28.389 I/OAT 0000:80:04.2 8086 2021 
1 ioatdma - - 00:00:28.389 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:28.389 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:28.389 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:28.389 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:28.389 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:28.389 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:28.389 + rm -f /tmp/spdk-ld-path 00:00:28.389 + source autorun-spdk.conf 00:00:28.389 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:28.389 ++ SPDK_TEST_FUZZER_SHORT=1 00:00:28.389 ++ SPDK_TEST_FUZZER=1 00:00:28.389 ++ SPDK_TEST_SETUP=1 00:00:28.389 ++ SPDK_RUN_UBSAN=1 00:00:28.389 ++ RUN_NIGHTLY=0 00:00:28.389 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:28.389 + [[ -n '' ]] 00:00:28.389 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:28.389 + for M in /var/spdk/build-*-manifest.txt 00:00:28.389 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:00:28.389 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:28.389 + for M in /var/spdk/build-*-manifest.txt 00:00:28.389 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:28.389 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:28.389 + for M in /var/spdk/build-*-manifest.txt 00:00:28.389 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:28.389 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:28.389 ++ uname 00:00:28.389 + [[ Linux == \L\i\n\u\x ]] 00:00:28.389 + sudo dmesg -T 00:00:28.389 + sudo dmesg --clear 00:00:28.389 + dmesg_pid=3714179 00:00:28.389 + [[ Fedora Linux == FreeBSD ]] 00:00:28.389 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:28.389 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:28.389 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:28.389 + [[ -x /usr/src/fio-static/fio ]] 00:00:28.389 + export FIO_BIN=/usr/src/fio-static/fio 00:00:28.389 + FIO_BIN=/usr/src/fio-static/fio 00:00:28.389 + sudo dmesg -Tw 00:00:28.389 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:28.389 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:28.389 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:28.389 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:28.389 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:28.389 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:28.389 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:28.389 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:28.389 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:28.389 Test configuration: 00:00:28.389 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:28.389 SPDK_TEST_FUZZER_SHORT=1 00:00:28.389 SPDK_TEST_FUZZER=1 00:00:28.389 SPDK_TEST_SETUP=1 00:00:28.389 SPDK_RUN_UBSAN=1 00:00:28.389 RUN_NIGHTLY=0 13:08:36 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:00:28.389 13:08:36 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:00:28.389 13:08:36 -- scripts/common.sh@15 -- $ shopt -s extglob 00:00:28.389 13:08:36 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:28.389 13:08:36 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:28.389 13:08:36 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:28.389 13:08:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:28.389 13:08:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:28.389 13:08:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:28.389 13:08:36 -- paths/export.sh@5 -- $ export PATH 00:00:28.389 13:08:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:28.389 13:08:36 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:00:28.389 13:08:36 -- common/autobuild_common.sh@486 -- $ date +%s 00:00:28.389 13:08:36 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729163316.XXXXXX 00:00:28.389 13:08:36 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729163316.UEIpr2 00:00:28.389 13:08:36 -- common/autobuild_common.sh@488 -- $ [[ -n '' 
]] 00:00:28.389 13:08:36 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:00:28.389 13:08:36 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:00:28.389 13:08:36 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:28.389 13:08:36 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:28.389 13:08:36 -- common/autobuild_common.sh@502 -- $ get_config_params 00:00:28.389 13:08:36 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:00:28.389 13:08:36 -- common/autotest_common.sh@10 -- $ set +x 00:00:28.389 13:08:36 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:28.389 13:08:36 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:00:28.389 13:08:36 -- pm/common@17 -- $ local monitor 00:00:28.389 13:08:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:28.389 13:08:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:28.389 13:08:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:28.389 13:08:36 -- pm/common@21 -- $ date +%s 00:00:28.389 13:08:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:28.389 13:08:36 -- pm/common@21 -- $ date +%s 00:00:28.389 13:08:36 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1729163316 00:00:28.389 13:08:36 -- pm/common@25 -- $ sleep 1 00:00:28.389 13:08:36 -- pm/common@21 -- $ date +%s 00:00:28.389 13:08:36 -- pm/common@21 -- $ date +%s 00:00:28.389 13:08:36 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1729163316 00:00:28.389 13:08:36 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1729163316 00:00:28.389 13:08:36 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1729163316 00:00:28.389 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1729163316_collect-cpu-temp.pm.log 00:00:28.389 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1729163316_collect-vmstat.pm.log 00:00:28.389 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1729163316_collect-cpu-load.pm.log 00:00:28.389 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1729163316_collect-bmc-pm.bmc.pm.log 00:00:29.328 13:08:37 -- 
common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:00:29.328 13:08:37 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:29.328 13:08:37 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:29.328 13:08:37 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:29.328 13:08:37 -- spdk/autobuild.sh@16 -- $ date -u 00:00:29.328 Thu Oct 17 11:08:37 AM UTC 2024 00:00:29.328 13:08:37 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:29.328 v25.01-pre-72-gcca20a51a 00:00:29.328 13:08:37 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:29.328 13:08:37 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:29.329 13:08:37 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:29.329 13:08:37 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:00:29.329 13:08:37 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:00:29.329 13:08:37 -- common/autotest_common.sh@10 -- $ set +x 00:00:29.588 ************************************ 00:00:29.588 START TEST ubsan 00:00:29.588 ************************************ 00:00:29.588 13:08:37 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:00:29.588 using ubsan 00:00:29.588 00:00:29.588 real 0m0.001s 00:00:29.588 user 0m0.000s 00:00:29.588 sys 0m0.000s 00:00:29.588 13:08:37 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:00:29.588 13:08:37 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:29.588 ************************************ 00:00:29.588 END TEST ubsan 00:00:29.588 ************************************ 00:00:29.588 13:08:37 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:29.588 13:08:37 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:29.588 13:08:37 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:29.588 13:08:37 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:00:29.588 13:08:37 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:00:29.588 13:08:37 -- common/autobuild_common.sh@438 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:00:29.588 13:08:37 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:00:29.588 13:08:37 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:00:29.588 13:08:37 -- common/autotest_common.sh@10 -- $ set +x 00:00:29.588 ************************************ 00:00:29.588 START TEST autobuild_llvm_precompile 00:00:29.588 ************************************ 00:00:29.588 13:08:37 autobuild_llvm_precompile -- common/autotest_common.sh@1125 -- $ _llvm_precompile 00:00:29.588 13:08:37 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:00:29.588 13:08:37 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39) 00:00:29.588 Target: x86_64-redhat-linux-gnu 00:00:29.588 Thread model: posix 00:00:29.588 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:00:29.588 13:08:37 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=17 00:00:29.588 13:08:37 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-17 00:00:29.588 13:08:37 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-17 00:00:29.588 13:08:37 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17 00:00:29.588 13:08:37 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-17 00:00:29.588 13:08:37 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ 
fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:00:29.588 13:08:37 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:00:29.588 13:08:37 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]] 00:00:29.588 13:08:37 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a' 00:00:29.588 13:08:37 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:00:29.847 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:00:29.847 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:00:30.106 Using 'verbs' RDMA provider 00:00:45.932 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:00:58.147 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:00:58.716 Creating mk/config.mk...done. 00:00:58.716 Creating mk/cc.flags.mk...done. 00:00:58.716 Type 'make' to build. 00:00:58.716 00:00:58.716 real 0m28.999s 00:00:58.716 user 0m12.646s 00:00:58.716 sys 0m15.611s 00:00:58.716 13:09:06 autobuild_llvm_precompile -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:00:58.716 13:09:06 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:00:58.716 ************************************ 00:00:58.716 END TEST autobuild_llvm_precompile 00:00:58.716 ************************************ 00:00:58.716 13:09:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:58.716 13:09:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:58.716 13:09:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:58.716 13:09:06 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:00:58.716 13:09:06 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:00:58.975 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:00:58.976 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:00:59.235 Using 'verbs' RDMA provider 00:01:12.385 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:22.369 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:23.197 Creating mk/config.mk...done. 00:01:23.197 Creating mk/cc.flags.mk...done. 
00:01:23.197 Type 'make' to build. 00:01:23.197 13:09:30 -- spdk/autobuild.sh@70 -- $ run_test make make -j112 00:01:23.197 13:09:30 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:23.197 13:09:30 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:23.197 13:09:30 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.197 ************************************ 00:01:23.197 START TEST make 00:01:23.197 ************************************ 00:01:23.197 13:09:31 make -- common/autotest_common.sh@1125 -- $ make -j112 00:01:23.455 make[1]: Nothing to be done for 'all'. 00:01:24.837 The Meson build system 00:01:24.837 Version: 1.5.0 00:01:24.837 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:01:24.837 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:24.837 Build type: native build 00:01:24.837 Project name: libvfio-user 00:01:24.837 Project version: 0.0.1 00:01:24.837 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:01:24.837 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:01:24.837 Host machine cpu family: x86_64 00:01:24.837 Host machine cpu: x86_64 00:01:24.837 Run-time dependency threads found: YES 00:01:24.837 Library dl found: YES 00:01:24.837 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:24.837 Run-time dependency json-c found: YES 0.17 00:01:24.837 Run-time dependency cmocka found: YES 1.1.7 00:01:24.837 Program pytest-3 found: NO 00:01:24.837 Program flake8 found: NO 00:01:24.837 Program misspell-fixer found: NO 00:01:24.837 Program restructuredtext-lint found: NO 00:01:24.837 Program valgrind found: YES (/usr/bin/valgrind) 00:01:24.837 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:24.837 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:24.837 Compiler for C supports arguments -Wwrite-strings: YES 00:01:24.837 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:24.837 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:24.837 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:24.837 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:24.837 Build targets in project: 8 00:01:24.837 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:24.837 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:24.837 00:01:24.837 libvfio-user 0.0.1 00:01:24.837 00:01:24.837 User defined options 00:01:24.837 buildtype : debug 00:01:24.837 default_library: static 00:01:24.837 libdir : /usr/local/lib 00:01:24.837 00:01:24.837 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:25.404 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:25.404 [1/36] Compiling C object samples/lspci.p/lspci.c.o 00:01:25.404 [2/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:25.404 [3/36] Compiling C object samples/null.p/null.c.o 00:01:25.404 [4/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:01:25.404 [5/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:01:25.404 [6/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:25.404 [7/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:25.404 [8/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:25.404 [9/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:25.404 [10/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:25.404 [11/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:01:25.404 [12/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:01:25.404 [13/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:25.404 [14/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:25.404 [15/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:25.404 [16/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:25.404 [17/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:01:25.404 [18/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:25.404 [19/36] Compiling C object test/unit_tests.p/mocks.c.o 00:01:25.404 [20/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:25.404 [21/36] Compiling C object samples/server.p/server.c.o 00:01:25.404 [22/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:01:25.404 [23/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:01:25.404 [24/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:25.404 [25/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:25.404 [26/36] Compiling C object samples/client.p/client.c.o 00:01:25.404 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:01:25.404 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:25.404 [29/36] Linking static target lib/libvfio-user.a 00:01:25.404 [30/36] Linking target samples/client 00:01:25.663 [31/36] Linking target samples/server 00:01:25.663 [32/36] Linking target samples/shadow_ioeventfd_server 00:01:25.663 [33/36] Linking target samples/null 00:01:25.663 [34/36] Linking target samples/gpio-pci-idio-16 00:01:25.663 [35/36] Linking target samples/lspci 00:01:25.663 [36/36] Linking target test/unit_tests 00:01:25.663 INFO: autodetecting backend as ninja 00:01:25.663 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:25.663 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:25.922 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:25.922 ninja: no work to do. 00:01:31.191 The Meson build system 00:01:31.191 Version: 1.5.0 00:01:31.191 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:01:31.191 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:01:31.191 Build type: native build 00:01:31.191 Program cat found: YES (/usr/bin/cat) 00:01:31.191 Project name: DPDK 00:01:31.191 Project version: 24.03.0 00:01:31.191 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:01:31.191 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:01:31.191 Host machine cpu family: x86_64 00:01:31.191 Host machine cpu: x86_64 00:01:31.191 Message: ## Building in Developer Mode ## 00:01:31.191 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:31.191 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:31.191 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:31.191 Program python3 found: YES (/usr/bin/python3) 00:01:31.191 Program cat found: YES (/usr/bin/cat) 00:01:31.191 Compiler for C supports arguments -march=native: YES 00:01:31.191 Checking for size of "void *" : 8 00:01:31.191 Checking for size of "void *" : 8 (cached) 00:01:31.191 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:31.191 Library m found: YES 00:01:31.191 Library numa found: YES 00:01:31.191 Has header "numaif.h" : YES 00:01:31.191 Library fdt found: NO 00:01:31.191 Library execinfo found: NO 00:01:31.191 Has header "execinfo.h" : YES 00:01:31.191 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:31.191 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:31.191 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:31.191 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:31.191 Run-time dependency openssl found: YES 3.1.1 00:01:31.191 Run-time dependency libpcap found: YES 1.10.4 00:01:31.191 Has header "pcap.h" with dependency libpcap: YES 00:01:31.191 Compiler for C supports arguments -Wcast-qual: YES 00:01:31.191 Compiler for C supports arguments -Wdeprecated: YES 00:01:31.191 Compiler for C supports arguments -Wformat: YES 00:01:31.191 Compiler for C supports arguments -Wformat-nonliteral: YES 00:01:31.191 Compiler for C supports arguments -Wformat-security: YES 00:01:31.191 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:31.191 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:31.191 Compiler for C supports arguments -Wnested-externs: YES 00:01:31.191 Compiler for C supports arguments -Wold-style-definition: YES 00:01:31.191 Compiler for C supports arguments -Wpointer-arith: YES 00:01:31.191 Compiler for C supports arguments -Wsign-compare: YES 00:01:31.191 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:31.191 Compiler for C supports arguments -Wundef: YES 00:01:31.191 Compiler for C supports arguments -Wwrite-strings: YES 00:01:31.191 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:31.191 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:01:31.191 Compiler for C supports arguments -Wno-missing-field-initializers: 
YES 00:01:31.191 Program objdump found: YES (/usr/bin/objdump) 00:01:31.191 Compiler for C supports arguments -mavx512f: YES 00:01:31.191 Checking if "AVX512 checking" compiles: YES 00:01:31.191 Fetching value of define "__SSE4_2__" : 1 00:01:31.191 Fetching value of define "__AES__" : 1 00:01:31.191 Fetching value of define "__AVX__" : 1 00:01:31.191 Fetching value of define "__AVX2__" : 1 00:01:31.191 Fetching value of define "__AVX512BW__" : 1 00:01:31.191 Fetching value of define "__AVX512CD__" : 1 00:01:31.191 Fetching value of define "__AVX512DQ__" : 1 00:01:31.191 Fetching value of define "__AVX512F__" : 1 00:01:31.191 Fetching value of define "__AVX512VL__" : 1 00:01:31.191 Fetching value of define "__PCLMUL__" : 1 00:01:31.191 Fetching value of define "__RDRND__" : 1 00:01:31.191 Fetching value of define "__RDSEED__" : 1 00:01:31.191 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:31.191 Fetching value of define "__znver1__" : (undefined) 00:01:31.191 Fetching value of define "__znver2__" : (undefined) 00:01:31.191 Fetching value of define "__znver3__" : (undefined) 00:01:31.191 Fetching value of define "__znver4__" : (undefined) 00:01:31.191 Compiler for C supports arguments -Wno-format-truncation: NO 00:01:31.191 Message: lib/log: Defining dependency "log" 00:01:31.191 Message: lib/kvargs: Defining dependency "kvargs" 00:01:31.191 Message: lib/telemetry: Defining dependency "telemetry" 00:01:31.191 Checking for function "getentropy" : NO 00:01:31.191 Message: lib/eal: Defining dependency "eal" 00:01:31.191 Message: lib/ring: Defining dependency "ring" 00:01:31.191 Message: lib/rcu: Defining dependency "rcu" 00:01:31.191 Message: lib/mempool: Defining dependency "mempool" 00:01:31.191 Message: lib/mbuf: Defining dependency "mbuf" 00:01:31.191 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:31.191 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:31.191 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:31.191 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:31.191 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:31.191 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:31.191 Compiler for C supports arguments -mpclmul: YES 00:01:31.191 Compiler for C supports arguments -maes: YES 00:01:31.191 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:31.191 Compiler for C supports arguments -mavx512bw: YES 00:01:31.191 Compiler for C supports arguments -mavx512dq: YES 00:01:31.191 Compiler for C supports arguments -mavx512vl: YES 00:01:31.191 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:31.192 Compiler for C supports arguments -mavx2: YES 00:01:31.192 Compiler for C supports arguments -mavx: YES 00:01:31.192 Message: lib/net: Defining dependency "net" 00:01:31.192 Message: lib/meter: Defining dependency "meter" 00:01:31.192 Message: lib/ethdev: Defining dependency "ethdev" 00:01:31.192 Message: lib/pci: Defining dependency "pci" 00:01:31.192 Message: lib/cmdline: Defining dependency "cmdline" 00:01:31.192 Message: lib/hash: Defining dependency "hash" 00:01:31.192 Message: lib/timer: Defining dependency "timer" 00:01:31.192 Message: lib/compressdev: Defining dependency "compressdev" 00:01:31.192 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:31.192 Message: lib/dmadev: Defining dependency "dmadev" 00:01:31.192 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:31.192 Message: lib/power: Defining dependency "power" 00:01:31.192 Message: lib/reorder: Defining 
dependency "reorder" 00:01:31.192 Message: lib/security: Defining dependency "security" 00:01:31.192 Has header "linux/userfaultfd.h" : YES 00:01:31.192 Has header "linux/vduse.h" : YES 00:01:31.192 Message: lib/vhost: Defining dependency "vhost" 00:01:31.192 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:01:31.192 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:31.192 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:31.192 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:31.192 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:31.192 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:31.192 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:31.192 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:31.192 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:31.192 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:31.192 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:31.192 Configuring doxy-api-html.conf using configuration 00:01:31.192 Configuring doxy-api-man.conf using configuration 00:01:31.192 Program mandb found: YES (/usr/bin/mandb) 00:01:31.192 Program sphinx-build found: NO 00:01:31.192 Configuring rte_build_config.h using configuration 00:01:31.192 Message: 00:01:31.192 ================= 00:01:31.192 Applications Enabled 00:01:31.192 ================= 00:01:31.192 00:01:31.192 apps: 00:01:31.192 00:01:31.192 00:01:31.192 Message: 00:01:31.192 ================= 00:01:31.192 Libraries Enabled 00:01:31.192 ================= 00:01:31.192 00:01:31.192 libs: 00:01:31.192 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:31.192 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:31.192 cryptodev, dmadev, power, reorder, security, vhost, 00:01:31.192 00:01:31.192 Message: 00:01:31.192 =============== 00:01:31.192 Drivers Enabled 00:01:31.192 =============== 00:01:31.192 00:01:31.192 common: 00:01:31.192 00:01:31.192 bus: 00:01:31.192 pci, vdev, 00:01:31.192 mempool: 00:01:31.192 ring, 00:01:31.192 dma: 00:01:31.192 00:01:31.192 net: 00:01:31.192 00:01:31.192 crypto: 00:01:31.192 00:01:31.192 compress: 00:01:31.192 00:01:31.192 vdpa: 00:01:31.192 00:01:31.192 00:01:31.192 Message: 00:01:31.192 ================= 00:01:31.192 Content Skipped 00:01:31.192 ================= 00:01:31.192 00:01:31.192 apps: 00:01:31.192 dumpcap: explicitly disabled via build config 00:01:31.192 graph: explicitly disabled via build config 00:01:31.192 pdump: explicitly disabled via build config 00:01:31.192 proc-info: explicitly disabled via build config 00:01:31.192 test-acl: explicitly disabled via build config 00:01:31.192 test-bbdev: explicitly disabled via build config 00:01:31.192 test-cmdline: explicitly disabled via build config 00:01:31.192 test-compress-perf: explicitly disabled via build config 00:01:31.192 test-crypto-perf: explicitly disabled via build config 00:01:31.192 test-dma-perf: explicitly disabled via build config 00:01:31.192 test-eventdev: explicitly disabled via build config 00:01:31.192 test-fib: explicitly disabled via build config 00:01:31.192 test-flow-perf: explicitly disabled via build config 00:01:31.192 test-gpudev: explicitly disabled via build config 00:01:31.192 test-mldev: explicitly disabled via build config 00:01:31.192 test-pipeline: explicitly disabled via build config 00:01:31.192 test-pmd: 
explicitly disabled via build config 00:01:31.192 test-regex: explicitly disabled via build config 00:01:31.192 test-sad: explicitly disabled via build config 00:01:31.192 test-security-perf: explicitly disabled via build config 00:01:31.192 00:01:31.192 libs: 00:01:31.192 argparse: explicitly disabled via build config 00:01:31.192 metrics: explicitly disabled via build config 00:01:31.192 acl: explicitly disabled via build config 00:01:31.192 bbdev: explicitly disabled via build config 00:01:31.192 bitratestats: explicitly disabled via build config 00:01:31.192 bpf: explicitly disabled via build config 00:01:31.192 cfgfile: explicitly disabled via build config 00:01:31.192 distributor: explicitly disabled via build config 00:01:31.192 efd: explicitly disabled via build config 00:01:31.192 eventdev: explicitly disabled via build config 00:01:31.192 dispatcher: explicitly disabled via build config 00:01:31.192 gpudev: explicitly disabled via build config 00:01:31.192 gro: explicitly disabled via build config 00:01:31.192 gso: explicitly disabled via build config 00:01:31.192 ip_frag: explicitly disabled via build config 00:01:31.192 jobstats: explicitly disabled via build config 00:01:31.192 latencystats: explicitly disabled via build config 00:01:31.192 lpm: explicitly disabled via build config 00:01:31.192 member: explicitly disabled via build config 00:01:31.192 pcapng: explicitly disabled via build config 00:01:31.192 rawdev: explicitly disabled via build config 00:01:31.192 regexdev: explicitly disabled via build config 00:01:31.192 mldev: explicitly disabled via build config 00:01:31.192 rib: explicitly disabled via build config 00:01:31.192 sched: explicitly disabled via build config 00:01:31.192 stack: explicitly disabled via build config 00:01:31.192 ipsec: explicitly disabled via build config 00:01:31.192 pdcp: explicitly disabled via build config 00:01:31.192 fib: explicitly disabled via build config 00:01:31.192 port: explicitly disabled via build config 00:01:31.192 pdump: explicitly disabled via build config 00:01:31.192 table: explicitly disabled via build config 00:01:31.192 pipeline: explicitly disabled via build config 00:01:31.192 graph: explicitly disabled via build config 00:01:31.192 node: explicitly disabled via build config 00:01:31.192 00:01:31.192 drivers: 00:01:31.192 common/cpt: not in enabled drivers build config 00:01:31.192 common/dpaax: not in enabled drivers build config 00:01:31.192 common/iavf: not in enabled drivers build config 00:01:31.192 common/idpf: not in enabled drivers build config 00:01:31.192 common/ionic: not in enabled drivers build config 00:01:31.192 common/mvep: not in enabled drivers build config 00:01:31.192 common/octeontx: not in enabled drivers build config 00:01:31.192 bus/auxiliary: not in enabled drivers build config 00:01:31.192 bus/cdx: not in enabled drivers build config 00:01:31.192 bus/dpaa: not in enabled drivers build config 00:01:31.192 bus/fslmc: not in enabled drivers build config 00:01:31.192 bus/ifpga: not in enabled drivers build config 00:01:31.192 bus/platform: not in enabled drivers build config 00:01:31.192 bus/uacce: not in enabled drivers build config 00:01:31.192 bus/vmbus: not in enabled drivers build config 00:01:31.192 common/cnxk: not in enabled drivers build config 00:01:31.192 common/mlx5: not in enabled drivers build config 00:01:31.192 common/nfp: not in enabled drivers build config 00:01:31.192 common/nitrox: not in enabled drivers build config 00:01:31.192 common/qat: not in enabled drivers build config 
00:01:31.192 common/sfc_efx: not in enabled drivers build config 00:01:31.192 mempool/bucket: not in enabled drivers build config 00:01:31.192 mempool/cnxk: not in enabled drivers build config 00:01:31.192 mempool/dpaa: not in enabled drivers build config 00:01:31.192 mempool/dpaa2: not in enabled drivers build config 00:01:31.192 mempool/octeontx: not in enabled drivers build config 00:01:31.192 mempool/stack: not in enabled drivers build config 00:01:31.192 dma/cnxk: not in enabled drivers build config 00:01:31.192 dma/dpaa: not in enabled drivers build config 00:01:31.192 dma/dpaa2: not in enabled drivers build config 00:01:31.192 dma/hisilicon: not in enabled drivers build config 00:01:31.192 dma/idxd: not in enabled drivers build config 00:01:31.192 dma/ioat: not in enabled drivers build config 00:01:31.192 dma/skeleton: not in enabled drivers build config 00:01:31.192 net/af_packet: not in enabled drivers build config 00:01:31.192 net/af_xdp: not in enabled drivers build config 00:01:31.192 net/ark: not in enabled drivers build config 00:01:31.192 net/atlantic: not in enabled drivers build config 00:01:31.192 net/avp: not in enabled drivers build config 00:01:31.192 net/axgbe: not in enabled drivers build config 00:01:31.192 net/bnx2x: not in enabled drivers build config 00:01:31.192 net/bnxt: not in enabled drivers build config 00:01:31.192 net/bonding: not in enabled drivers build config 00:01:31.192 net/cnxk: not in enabled drivers build config 00:01:31.192 net/cpfl: not in enabled drivers build config 00:01:31.192 net/cxgbe: not in enabled drivers build config 00:01:31.192 net/dpaa: not in enabled drivers build config 00:01:31.192 net/dpaa2: not in enabled drivers build config 00:01:31.192 net/e1000: not in enabled drivers build config 00:01:31.192 net/ena: not in enabled drivers build config 00:01:31.192 net/enetc: not in enabled drivers build config 00:01:31.192 net/enetfec: not in enabled drivers build config 00:01:31.192 net/enic: not in enabled drivers build config 00:01:31.192 net/failsafe: not in enabled drivers build config 00:01:31.192 net/fm10k: not in enabled drivers build config 00:01:31.192 net/gve: not in enabled drivers build config 00:01:31.192 net/hinic: not in enabled drivers build config 00:01:31.192 net/hns3: not in enabled drivers build config 00:01:31.193 net/i40e: not in enabled drivers build config 00:01:31.193 net/iavf: not in enabled drivers build config 00:01:31.193 net/ice: not in enabled drivers build config 00:01:31.193 net/idpf: not in enabled drivers build config 00:01:31.193 net/igc: not in enabled drivers build config 00:01:31.193 net/ionic: not in enabled drivers build config 00:01:31.193 net/ipn3ke: not in enabled drivers build config 00:01:31.193 net/ixgbe: not in enabled drivers build config 00:01:31.193 net/mana: not in enabled drivers build config 00:01:31.193 net/memif: not in enabled drivers build config 00:01:31.193 net/mlx4: not in enabled drivers build config 00:01:31.193 net/mlx5: not in enabled drivers build config 00:01:31.193 net/mvneta: not in enabled drivers build config 00:01:31.193 net/mvpp2: not in enabled drivers build config 00:01:31.193 net/netvsc: not in enabled drivers build config 00:01:31.193 net/nfb: not in enabled drivers build config 00:01:31.193 net/nfp: not in enabled drivers build config 00:01:31.193 net/ngbe: not in enabled drivers build config 00:01:31.193 net/null: not in enabled drivers build config 00:01:31.193 net/octeontx: not in enabled drivers build config 00:01:31.193 net/octeon_ep: not in enabled 
drivers build config 00:01:31.193 net/pcap: not in enabled drivers build config 00:01:31.193 net/pfe: not in enabled drivers build config 00:01:31.193 net/qede: not in enabled drivers build config 00:01:31.193 net/ring: not in enabled drivers build config 00:01:31.193 net/sfc: not in enabled drivers build config 00:01:31.193 net/softnic: not in enabled drivers build config 00:01:31.193 net/tap: not in enabled drivers build config 00:01:31.193 net/thunderx: not in enabled drivers build config 00:01:31.193 net/txgbe: not in enabled drivers build config 00:01:31.193 net/vdev_netvsc: not in enabled drivers build config 00:01:31.193 net/vhost: not in enabled drivers build config 00:01:31.193 net/virtio: not in enabled drivers build config 00:01:31.193 net/vmxnet3: not in enabled drivers build config 00:01:31.193 raw/*: missing internal dependency, "rawdev" 00:01:31.193 crypto/armv8: not in enabled drivers build config 00:01:31.193 crypto/bcmfs: not in enabled drivers build config 00:01:31.193 crypto/caam_jr: not in enabled drivers build config 00:01:31.193 crypto/ccp: not in enabled drivers build config 00:01:31.193 crypto/cnxk: not in enabled drivers build config 00:01:31.193 crypto/dpaa_sec: not in enabled drivers build config 00:01:31.193 crypto/dpaa2_sec: not in enabled drivers build config 00:01:31.193 crypto/ipsec_mb: not in enabled drivers build config 00:01:31.193 crypto/mlx5: not in enabled drivers build config 00:01:31.193 crypto/mvsam: not in enabled drivers build config 00:01:31.193 crypto/nitrox: not in enabled drivers build config 00:01:31.193 crypto/null: not in enabled drivers build config 00:01:31.193 crypto/octeontx: not in enabled drivers build config 00:01:31.193 crypto/openssl: not in enabled drivers build config 00:01:31.193 crypto/scheduler: not in enabled drivers build config 00:01:31.193 crypto/uadk: not in enabled drivers build config 00:01:31.193 crypto/virtio: not in enabled drivers build config 00:01:31.193 compress/isal: not in enabled drivers build config 00:01:31.193 compress/mlx5: not in enabled drivers build config 00:01:31.193 compress/nitrox: not in enabled drivers build config 00:01:31.193 compress/octeontx: not in enabled drivers build config 00:01:31.193 compress/zlib: not in enabled drivers build config 00:01:31.193 regex/*: missing internal dependency, "regexdev" 00:01:31.193 ml/*: missing internal dependency, "mldev" 00:01:31.193 vdpa/ifc: not in enabled drivers build config 00:01:31.193 vdpa/mlx5: not in enabled drivers build config 00:01:31.193 vdpa/nfp: not in enabled drivers build config 00:01:31.193 vdpa/sfc: not in enabled drivers build config 00:01:31.193 event/*: missing internal dependency, "eventdev" 00:01:31.193 baseband/*: missing internal dependency, "bbdev" 00:01:31.193 gpu/*: missing internal dependency, "gpudev" 00:01:31.193 00:01:31.193 00:01:31.784 Build targets in project: 85 00:01:31.784 00:01:31.784 DPDK 24.03.0 00:01:31.784 00:01:31.784 User defined options 00:01:31.784 buildtype : debug 00:01:31.784 default_library : static 00:01:31.784 libdir : lib 00:01:31.784 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:31.784 c_args : -fPIC -Werror 00:01:31.784 c_link_args : 00:01:31.784 cpu_instruction_set: native 00:01:31.784 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:31.784 disable_libs : 
port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:31.784 enable_docs : false 00:01:31.784 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:31.784 enable_kmods : false 00:01:31.784 max_lcores : 128 00:01:31.784 tests : false 00:01:31.784 00:01:31.784 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:32.045 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:01:32.045 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:32.045 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:32.045 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:32.045 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:32.045 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:32.045 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:32.045 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:32.045 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:32.045 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:32.045 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:32.045 [11/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:32.045 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:32.045 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:32.045 [14/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:32.045 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:32.045 [16/268] Linking static target lib/librte_kvargs.a 00:01:32.045 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:32.045 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:32.307 [19/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:32.307 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:32.307 [21/268] Linking static target lib/librte_log.a 00:01:32.307 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:32.307 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:32.307 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:32.307 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:32.307 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:32.307 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:32.307 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:32.307 [29/268] Linking static target lib/librte_pci.a 00:01:32.307 [30/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:32.307 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:32.307 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:32.307 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:32.307 [34/268] Compiling C object 
lib/librte_power.a.p/power_power_common.c.o 00:01:32.307 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:32.569 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:32.569 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:32.569 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:32.569 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:32.569 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:32.569 [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:32.569 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:32.569 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:32.569 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:32.569 [45/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:32.569 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:32.569 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:32.569 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:32.569 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:32.569 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:32.569 [51/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.570 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:32.570 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:32.570 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:32.570 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:32.570 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:32.570 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:32.570 [58/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:32.570 [59/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:32.570 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:32.570 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:32.570 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:32.570 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:32.570 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:32.570 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:32.570 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:32.570 [67/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.570 [68/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:32.570 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:32.570 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:32.570 [71/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:32.570 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:32.570 [73/268] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:32.570 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:32.570 [75/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:32.570 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:32.570 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:32.570 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:32.570 [79/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:32.570 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:32.570 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:32.828 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:32.828 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:32.828 [84/268] Linking static target lib/librte_meter.a 00:01:32.828 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:32.828 [86/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:32.828 [87/268] Linking static target lib/librte_telemetry.a 00:01:32.828 [88/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:32.828 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:32.828 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:32.828 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:32.828 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:32.828 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:32.828 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:32.828 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:32.828 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:32.828 [97/268] Linking static target lib/librte_ring.a 00:01:32.828 [98/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:32.828 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:32.828 [100/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:32.828 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:32.828 [102/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:32.828 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:32.828 [104/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:32.828 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:32.828 [106/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:32.828 [107/268] Linking static target lib/librte_cmdline.a 00:01:32.828 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:32.828 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:32.828 [110/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:32.828 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:32.828 [112/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:32.828 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:32.828 
[114/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:32.828 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:32.828 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:32.828 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:32.828 [118/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:32.828 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:32.828 [120/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:32.828 [121/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:32.828 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:32.828 [123/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:32.828 [124/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:32.828 [125/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:32.828 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:32.828 [127/268] Linking static target lib/librte_eal.a 00:01:32.828 [128/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:32.828 [129/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:32.828 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:32.828 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:32.828 [132/268] Linking static target lib/librte_mempool.a 00:01:32.828 [133/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:32.828 [134/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:32.828 [135/268] Linking static target lib/librte_timer.a 00:01:32.828 [136/268] Linking static target lib/librte_rcu.a 00:01:32.828 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:32.828 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:32.828 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:32.828 [140/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:32.828 [141/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:32.828 [142/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:32.828 [143/268] Linking static target lib/librte_net.a 00:01:32.828 [144/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:32.828 [145/268] Linking static target lib/librte_compressdev.a 00:01:32.828 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:32.828 [147/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:32.828 [148/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:32.828 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:32.828 [150/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:32.828 [151/268] Linking static target lib/librte_hash.a 00:01:32.828 [152/268] Linking static target lib/librte_dmadev.a 00:01:33.112 [153/268] Linking static target lib/librte_mbuf.a 00:01:33.112 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:33.112 [155/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:33.112 [156/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:33.112 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:33.112 [158/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.112 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:33.112 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:33.112 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:33.112 [162/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.112 [163/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:33.112 [164/268] Linking target lib/librte_log.so.24.1 00:01:33.112 [165/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:33.112 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:33.112 [167/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:33.112 [168/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:33.112 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:33.112 [170/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.112 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:33.112 [172/268] Linking static target lib/librte_reorder.a 00:01:33.112 [173/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:33.112 [174/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:33.112 [175/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:33.112 [176/268] Linking static target lib/librte_security.a 00:01:33.112 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:33.112 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:33.112 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:33.112 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:33.112 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:33.112 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:33.112 [183/268] Linking static target lib/librte_cryptodev.a 00:01:33.112 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:33.112 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:33.112 [186/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:33.112 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:33.415 [188/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:33.415 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:33.415 [190/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.415 [191/268] Linking static target lib/librte_power.a 00:01:33.415 [192/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:33.415 [193/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.415 [194/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:33.415 [195/268] Linking target lib/librte_kvargs.so.24.1 00:01:33.415 [196/268] Generating drivers/rte_bus_vdev.pmd.c with a custom 
command 00:01:33.415 [197/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:33.415 [198/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.415 [199/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:33.415 [200/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:33.415 [201/268] Linking static target drivers/librte_bus_vdev.a 00:01:33.415 [202/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.415 [203/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:33.415 [204/268] Linking target lib/librte_telemetry.so.24.1 00:01:33.415 [205/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:33.415 [206/268] Linking static target lib/librte_ethdev.a 00:01:33.415 [207/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:33.415 [208/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:33.415 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:33.415 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:33.415 [211/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:33.415 [212/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:33.415 [213/268] Linking static target drivers/librte_bus_pci.a 00:01:33.415 [214/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:33.716 [215/268] Linking static target drivers/librte_mempool_ring.a 00:01:33.716 [216/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:33.716 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.716 [218/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.716 [219/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.716 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.716 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.716 [222/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.716 [223/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.994 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.994 [225/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.994 [226/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:34.253 [227/268] Linking static target lib/librte_vhost.a 00:01:34.253 [228/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.253 [229/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.632 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.199 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.324 
[232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.264 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.264 [234/268] Linking target lib/librte_eal.so.24.1 00:01:45.264 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:45.264 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:45.264 [237/268] Linking target lib/librte_ring.so.24.1 00:01:45.264 [238/268] Linking target lib/librte_timer.so.24.1 00:01:45.264 [239/268] Linking target lib/librte_meter.so.24.1 00:01:45.264 [240/268] Linking target lib/librte_pci.so.24.1 00:01:45.264 [241/268] Linking target lib/librte_dmadev.so.24.1 00:01:45.524 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:45.524 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:45.524 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:45.524 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:45.524 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:45.524 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:45.524 [248/268] Linking target lib/librte_rcu.so.24.1 00:01:45.524 [249/268] Linking target lib/librte_mempool.so.24.1 00:01:45.784 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:45.784 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:45.784 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:45.784 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:45.784 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:46.043 [255/268] Linking target lib/librte_reorder.so.24.1 00:01:46.043 [256/268] Linking target lib/librte_net.so.24.1 00:01:46.043 [257/268] Linking target lib/librte_compressdev.so.24.1 00:01:46.043 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:46.043 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:46.043 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:46.043 [261/268] Linking target lib/librte_hash.so.24.1 00:01:46.043 [262/268] Linking target lib/librte_ethdev.so.24.1 00:01:46.043 [263/268] Linking target lib/librte_security.so.24.1 00:01:46.043 [264/268] Linking target lib/librte_cmdline.so.24.1 00:01:46.302 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:46.302 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:46.302 [267/268] Linking target lib/librte_vhost.so.24.1 00:01:46.302 [268/268] Linking target lib/librte_power.so.24.1 00:01:46.302 INFO: autodetecting backend as ninja 00:01:46.302 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 112 00:01:47.240 CC lib/ut/ut.o 00:01:47.240 CC lib/log/log.o 00:01:47.240 CC lib/log/log_flags.o 00:01:47.240 CC lib/log/log_deprecated.o 00:01:47.240 CC lib/ut_mock/mock.o 00:01:47.500 LIB libspdk_ut.a 00:01:47.500 LIB libspdk_log.a 00:01:47.500 LIB libspdk_ut_mock.a 00:01:47.759 CC lib/ioat/ioat.o 00:01:47.759 CC lib/util/base64.o 00:01:47.759 CC lib/util/bit_array.o 00:01:47.759 CC lib/util/cpuset.o 
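For reference, the "User defined options" summary printed earlier is the DPDK meson configuration that SPDK's dpdkbuild step applied before the ninja backend command reported just above. The lines below are a hand-written approximation of that configure-and-build sequence using only values visible in the summary (several options, such as the long disable_apps/disable_libs lists and enable_docs/enable_kmods, are omitted for brevity), with placeholder paths rather than the exact harness invocation:

    # Rough re-run of the DPDK configure/build captured above; paths are placeholders
    # and option values are copied from the "User defined options" block in the log.
    cd spdk/dpdk
    meson setup build-tmp \
        --buildtype=debug \
        --default-library=static \
        --libdir=lib \
        --prefix="$PWD/build" \
        -Dc_args='-fPIC -Werror' \
        -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring' \
        -Dmax_lcores=128 \
        -Dtests=false
    # Same backend command meson reports above ("ninja -C .../build-tmp -j 112").
    ninja -C build-tmp -j 112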
00:01:47.759 CC lib/util/crc16.o 00:01:47.759 CC lib/util/crc32.o 00:01:47.759 CC lib/util/crc32c.o 00:01:47.759 CC lib/util/crc32_ieee.o 00:01:47.759 CC lib/util/crc64.o 00:01:47.759 CC lib/util/dif.o 00:01:47.759 CC lib/util/fd.o 00:01:47.759 CC lib/util/fd_group.o 00:01:47.759 CC lib/util/file.o 00:01:47.759 CC lib/util/hexlify.o 00:01:47.759 CC lib/util/iov.o 00:01:47.759 CC lib/util/math.o 00:01:47.759 CC lib/util/net.o 00:01:47.759 CC lib/util/pipe.o 00:01:47.759 CC lib/util/strerror_tls.o 00:01:47.759 CC lib/util/string.o 00:01:47.759 CC lib/util/zipf.o 00:01:47.759 CC lib/util/uuid.o 00:01:47.759 CC lib/util/xor.o 00:01:47.759 CC lib/util/md5.o 00:01:47.759 CC lib/dma/dma.o 00:01:47.759 CXX lib/trace_parser/trace.o 00:01:47.759 CC lib/vfio_user/host/vfio_user_pci.o 00:01:47.759 CC lib/vfio_user/host/vfio_user.o 00:01:47.759 LIB libspdk_ioat.a 00:01:48.018 LIB libspdk_dma.a 00:01:48.018 LIB libspdk_util.a 00:01:48.018 LIB libspdk_vfio_user.a 00:01:48.278 LIB libspdk_trace_parser.a 00:01:48.278 CC lib/json/json_write.o 00:01:48.278 CC lib/rdma_provider/common.o 00:01:48.278 CC lib/json/json_parse.o 00:01:48.278 CC lib/json/json_util.o 00:01:48.278 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:48.278 CC lib/rdma_utils/rdma_utils.o 00:01:48.278 CC lib/conf/conf.o 00:01:48.278 CC lib/idxd/idxd.o 00:01:48.278 CC lib/idxd/idxd_user.o 00:01:48.278 CC lib/idxd/idxd_kernel.o 00:01:48.278 CC lib/vmd/vmd.o 00:01:48.278 CC lib/vmd/led.o 00:01:48.278 CC lib/env_dpdk/env.o 00:01:48.278 CC lib/env_dpdk/memory.o 00:01:48.278 CC lib/env_dpdk/pci.o 00:01:48.278 CC lib/env_dpdk/init.o 00:01:48.278 CC lib/env_dpdk/threads.o 00:01:48.278 CC lib/env_dpdk/pci_ioat.o 00:01:48.278 CC lib/env_dpdk/pci_virtio.o 00:01:48.278 CC lib/env_dpdk/pci_idxd.o 00:01:48.278 CC lib/env_dpdk/pci_vmd.o 00:01:48.278 CC lib/env_dpdk/sigbus_handler.o 00:01:48.278 CC lib/env_dpdk/pci_dpdk.o 00:01:48.278 CC lib/env_dpdk/pci_event.o 00:01:48.278 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:48.278 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:48.537 LIB libspdk_rdma_provider.a 00:01:48.537 LIB libspdk_conf.a 00:01:48.537 LIB libspdk_rdma_utils.a 00:01:48.537 LIB libspdk_json.a 00:01:48.835 LIB libspdk_idxd.a 00:01:48.835 LIB libspdk_vmd.a 00:01:48.835 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:48.835 CC lib/jsonrpc/jsonrpc_server.o 00:01:48.835 CC lib/jsonrpc/jsonrpc_client.o 00:01:48.835 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:49.099 LIB libspdk_jsonrpc.a 00:01:49.358 CC lib/rpc/rpc.o 00:01:49.358 LIB libspdk_env_dpdk.a 00:01:49.358 LIB libspdk_rpc.a 00:01:49.617 CC lib/trace/trace.o 00:01:49.617 CC lib/trace/trace_flags.o 00:01:49.617 CC lib/trace/trace_rpc.o 00:01:49.617 CC lib/keyring/keyring.o 00:01:49.617 CC lib/notify/notify.o 00:01:49.617 CC lib/keyring/keyring_rpc.o 00:01:49.617 CC lib/notify/notify_rpc.o 00:01:49.877 LIB libspdk_notify.a 00:01:49.877 LIB libspdk_trace.a 00:01:49.877 LIB libspdk_keyring.a 00:01:50.137 CC lib/thread/thread.o 00:01:50.137 CC lib/thread/iobuf.o 00:01:50.137 CC lib/sock/sock.o 00:01:50.137 CC lib/sock/sock_rpc.o 00:01:50.395 LIB libspdk_sock.a 00:01:50.654 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:50.654 CC lib/nvme/nvme_ctrlr.o 00:01:50.654 CC lib/nvme/nvme_ns_cmd.o 00:01:50.654 CC lib/nvme/nvme_fabric.o 00:01:50.654 CC lib/nvme/nvme_pcie.o 00:01:50.654 CC lib/nvme/nvme_ns.o 00:01:50.654 CC lib/nvme/nvme_pcie_common.o 00:01:50.654 CC lib/nvme/nvme_qpair.o 00:01:50.654 CC lib/nvme/nvme.o 00:01:50.654 CC lib/nvme/nvme_quirks.o 00:01:50.654 CC lib/nvme/nvme_transport.o 00:01:50.654 CC lib/nvme/nvme_discovery.o 
00:01:50.654 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:50.654 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:50.654 CC lib/nvme/nvme_tcp.o 00:01:50.654 CC lib/nvme/nvme_opal.o 00:01:50.654 CC lib/nvme/nvme_io_msg.o 00:01:50.654 CC lib/nvme/nvme_poll_group.o 00:01:50.654 CC lib/nvme/nvme_zns.o 00:01:50.654 CC lib/nvme/nvme_stubs.o 00:01:50.654 CC lib/nvme/nvme_auth.o 00:01:50.654 CC lib/nvme/nvme_cuse.o 00:01:50.654 CC lib/nvme/nvme_vfio_user.o 00:01:50.654 CC lib/nvme/nvme_rdma.o 00:01:50.912 LIB libspdk_thread.a 00:01:51.169 CC lib/blob/request.o 00:01:51.169 CC lib/blob/zeroes.o 00:01:51.169 CC lib/blob/blobstore.o 00:01:51.169 CC lib/blob/blob_bs_dev.o 00:01:51.169 CC lib/vfu_tgt/tgt_rpc.o 00:01:51.169 CC lib/vfu_tgt/tgt_endpoint.o 00:01:51.169 CC lib/fsdev/fsdev.o 00:01:51.169 CC lib/fsdev/fsdev_io.o 00:01:51.169 CC lib/fsdev/fsdev_rpc.o 00:01:51.169 CC lib/virtio/virtio.o 00:01:51.169 CC lib/virtio/virtio_vfio_user.o 00:01:51.169 CC lib/virtio/virtio_vhost_user.o 00:01:51.169 CC lib/virtio/virtio_pci.o 00:01:51.169 CC lib/accel/accel.o 00:01:51.169 CC lib/accel/accel_rpc.o 00:01:51.169 CC lib/accel/accel_sw.o 00:01:51.169 CC lib/init/json_config.o 00:01:51.169 CC lib/init/subsystem.o 00:01:51.169 CC lib/init/subsystem_rpc.o 00:01:51.169 CC lib/init/rpc.o 00:01:51.428 LIB libspdk_init.a 00:01:51.428 LIB libspdk_virtio.a 00:01:51.428 LIB libspdk_vfu_tgt.a 00:01:51.428 LIB libspdk_fsdev.a 00:01:51.687 CC lib/event/reactor.o 00:01:51.687 CC lib/event/app.o 00:01:51.687 CC lib/event/log_rpc.o 00:01:51.687 CC lib/event/app_rpc.o 00:01:51.687 CC lib/event/scheduler_static.o 00:01:51.945 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:01:51.945 LIB libspdk_event.a 00:01:51.945 LIB libspdk_accel.a 00:01:51.945 LIB libspdk_nvme.a 00:01:52.204 LIB libspdk_fuse_dispatcher.a 00:01:52.204 CC lib/bdev/bdev.o 00:01:52.204 CC lib/bdev/bdev_rpc.o 00:01:52.204 CC lib/bdev/bdev_zone.o 00:01:52.204 CC lib/bdev/part.o 00:01:52.204 CC lib/bdev/scsi_nvme.o 00:01:52.772 LIB libspdk_blob.a 00:01:53.339 CC lib/lvol/lvol.o 00:01:53.340 CC lib/blobfs/blobfs.o 00:01:53.340 CC lib/blobfs/tree.o 00:01:53.599 LIB libspdk_lvol.a 00:01:53.599 LIB libspdk_blobfs.a 00:01:53.858 LIB libspdk_bdev.a 00:01:54.117 CC lib/nvmf/ctrlr.o 00:01:54.117 CC lib/nvmf/ctrlr_discovery.o 00:01:54.117 CC lib/nbd/nbd.o 00:01:54.117 CC lib/nvmf/ctrlr_bdev.o 00:01:54.117 CC lib/nbd/nbd_rpc.o 00:01:54.117 CC lib/nvmf/subsystem.o 00:01:54.118 CC lib/nvmf/transport.o 00:01:54.118 CC lib/nvmf/nvmf_rpc.o 00:01:54.118 CC lib/nvmf/nvmf.o 00:01:54.118 CC lib/nvmf/tcp.o 00:01:54.118 CC lib/nvmf/mdns_server.o 00:01:54.118 CC lib/nvmf/stubs.o 00:01:54.118 CC lib/nvmf/vfio_user.o 00:01:54.118 CC lib/nvmf/rdma.o 00:01:54.118 CC lib/nvmf/auth.o 00:01:54.118 CC lib/scsi/dev.o 00:01:54.118 CC lib/scsi/scsi.o 00:01:54.118 CC lib/scsi/lun.o 00:01:54.118 CC lib/ftl/ftl_layout.o 00:01:54.118 CC lib/ftl/ftl_core.o 00:01:54.118 CC lib/scsi/port.o 00:01:54.118 CC lib/ftl/ftl_init.o 00:01:54.118 CC lib/scsi/scsi_bdev.o 00:01:54.118 CC lib/scsi/scsi_pr.o 00:01:54.118 CC lib/ftl/ftl_debug.o 00:01:54.118 CC lib/ftl/ftl_io.o 00:01:54.118 CC lib/scsi/scsi_rpc.o 00:01:54.376 CC lib/ftl/ftl_sb.o 00:01:54.376 CC lib/scsi/task.o 00:01:54.376 CC lib/ftl/ftl_l2p.o 00:01:54.376 CC lib/ftl/ftl_l2p_flat.o 00:01:54.376 CC lib/ublk/ublk.o 00:01:54.376 CC lib/ftl/ftl_nv_cache.o 00:01:54.376 CC lib/ublk/ublk_rpc.o 00:01:54.376 CC lib/ftl/ftl_band.o 00:01:54.376 CC lib/ftl/ftl_band_ops.o 00:01:54.376 CC lib/ftl/ftl_writer.o 00:01:54.376 CC lib/ftl/ftl_rq.o 00:01:54.376 CC lib/ftl/ftl_reloc.o 
00:01:54.376 CC lib/ftl/ftl_l2p_cache.o 00:01:54.376 CC lib/ftl/mngt/ftl_mngt.o 00:01:54.376 CC lib/ftl/ftl_p2l.o 00:01:54.376 CC lib/ftl/ftl_p2l_log.o 00:01:54.376 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:54.376 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:54.376 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:54.376 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:54.376 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:54.376 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:54.376 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:54.376 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:54.376 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:54.376 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:54.376 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:54.376 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:54.376 CC lib/ftl/utils/ftl_md.o 00:01:54.376 CC lib/ftl/utils/ftl_conf.o 00:01:54.376 CC lib/ftl/utils/ftl_property.o 00:01:54.376 CC lib/ftl/utils/ftl_mempool.o 00:01:54.376 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:54.376 CC lib/ftl/utils/ftl_bitmap.o 00:01:54.376 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:54.376 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:54.376 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:54.376 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:54.376 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:54.376 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:54.376 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:54.376 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:54.376 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:54.376 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:54.376 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:01:54.376 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:01:54.376 CC lib/ftl/base/ftl_base_dev.o 00:01:54.376 CC lib/ftl/base/ftl_base_bdev.o 00:01:54.376 CC lib/ftl/ftl_trace.o 00:01:54.635 LIB libspdk_nbd.a 00:01:54.635 LIB libspdk_scsi.a 00:01:54.635 LIB libspdk_ublk.a 00:01:54.895 LIB libspdk_ftl.a 00:01:54.895 CC lib/vhost/vhost.o 00:01:54.895 CC lib/vhost/vhost_rpc.o 00:01:54.895 CC lib/vhost/vhost_scsi.o 00:01:54.895 CC lib/vhost/vhost_blk.o 00:01:54.895 CC lib/vhost/rte_vhost_user.o 00:01:54.895 CC lib/iscsi/conn.o 00:01:54.895 CC lib/iscsi/init_grp.o 00:01:54.895 CC lib/iscsi/iscsi.o 00:01:54.895 CC lib/iscsi/param.o 00:01:54.895 CC lib/iscsi/portal_grp.o 00:01:54.895 CC lib/iscsi/tgt_node.o 00:01:54.895 CC lib/iscsi/iscsi_subsystem.o 00:01:54.895 CC lib/iscsi/iscsi_rpc.o 00:01:54.895 CC lib/iscsi/task.o 00:01:55.462 LIB libspdk_nvmf.a 00:01:55.462 LIB libspdk_vhost.a 00:01:55.722 LIB libspdk_iscsi.a 00:01:56.292 CC module/vfu_device/vfu_virtio.o 00:01:56.292 CC module/vfu_device/vfu_virtio_blk.o 00:01:56.292 CC module/env_dpdk/env_dpdk_rpc.o 00:01:56.292 CC module/vfu_device/vfu_virtio_fs.o 00:01:56.292 CC module/vfu_device/vfu_virtio_scsi.o 00:01:56.292 CC module/vfu_device/vfu_virtio_rpc.o 00:01:56.292 CC module/blob/bdev/blob_bdev.o 00:01:56.292 CC module/accel/ioat/accel_ioat.o 00:01:56.292 CC module/accel/ioat/accel_ioat_rpc.o 00:01:56.292 CC module/keyring/linux/keyring.o 00:01:56.292 LIB libspdk_env_dpdk_rpc.a 00:01:56.292 CC module/keyring/linux/keyring_rpc.o 00:01:56.292 CC module/keyring/file/keyring_rpc.o 00:01:56.292 CC module/keyring/file/keyring.o 00:01:56.292 CC module/sock/posix/posix.o 00:01:56.292 CC module/fsdev/aio/fsdev_aio.o 00:01:56.292 CC module/fsdev/aio/fsdev_aio_rpc.o 00:01:56.292 CC module/fsdev/aio/linux_aio_mgr.o 00:01:56.292 CC module/accel/error/accel_error.o 00:01:56.292 CC module/accel/error/accel_error_rpc.o 00:01:56.292 CC module/accel/dsa/accel_dsa.o 00:01:56.292 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:56.292 CC module/accel/dsa/accel_dsa_rpc.o 
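The alternating "CC <path>.o" and "LIB libspdk_<name>.a" lines running through this part of the make output follow the usual compile-then-archive pattern for SPDK's static libraries. A minimal sketch of one such pair is shown below; the flags and output layout are placeholders, since the real ones come from SPDK's mk/*.mk makefiles after ./configure has generated the build config:

    # Illustration only: each "CC lib/util/<file>.o" / "LIB libspdk_util.a" pair in the
    # log boils down to a compile step followed by an archive step.
    mkdir -p build/lib/util
    cc -c -g -fPIC -Iinclude lib/util/crc32.c -o build/lib/util/crc32.o
    ar rcs build/lib/libspdk_util.a build/lib/util/crc32.o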
00:01:56.292 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:56.292 CC module/scheduler/gscheduler/gscheduler.o 00:01:56.292 CC module/accel/iaa/accel_iaa.o 00:01:56.292 CC module/accel/iaa/accel_iaa_rpc.o 00:01:56.292 LIB libspdk_keyring_linux.a 00:01:56.292 LIB libspdk_keyring_file.a 00:01:56.292 LIB libspdk_accel_ioat.a 00:01:56.292 LIB libspdk_scheduler_gscheduler.a 00:01:56.292 LIB libspdk_scheduler_dpdk_governor.a 00:01:56.292 LIB libspdk_accel_error.a 00:01:56.292 LIB libspdk_blob_bdev.a 00:01:56.292 LIB libspdk_scheduler_dynamic.a 00:01:56.292 LIB libspdk_accel_iaa.a 00:01:56.552 LIB libspdk_accel_dsa.a 00:01:56.552 LIB libspdk_vfu_device.a 00:01:56.552 LIB libspdk_sock_posix.a 00:01:56.552 LIB libspdk_fsdev_aio.a 00:01:56.811 CC module/bdev/lvol/vbdev_lvol.o 00:01:56.811 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:56.811 CC module/bdev/error/vbdev_error.o 00:01:56.811 CC module/bdev/error/vbdev_error_rpc.o 00:01:56.811 CC module/bdev/nvme/bdev_nvme.o 00:01:56.811 CC module/bdev/nvme/bdev_mdns_client.o 00:01:56.811 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:56.811 CC module/bdev/nvme/nvme_rpc.o 00:01:56.811 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:56.811 CC module/bdev/nvme/vbdev_opal.o 00:01:56.811 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:56.811 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:56.811 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:56.811 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:56.811 CC module/bdev/malloc/bdev_malloc.o 00:01:56.811 CC module/bdev/delay/vbdev_delay.o 00:01:56.811 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:56.811 CC module/bdev/split/vbdev_split.o 00:01:56.811 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:56.811 CC module/bdev/split/vbdev_split_rpc.o 00:01:56.811 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:56.811 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:56.811 CC module/bdev/raid/bdev_raid.o 00:01:56.811 CC module/bdev/raid/bdev_raid_rpc.o 00:01:56.811 CC module/bdev/gpt/vbdev_gpt.o 00:01:56.811 CC module/bdev/null/bdev_null.o 00:01:56.811 CC module/bdev/null/bdev_null_rpc.o 00:01:56.811 CC module/bdev/raid/bdev_raid_sb.o 00:01:56.811 CC module/bdev/raid/raid0.o 00:01:56.811 CC module/bdev/passthru/vbdev_passthru.o 00:01:56.811 CC module/bdev/raid/concat.o 00:01:56.811 CC module/bdev/gpt/gpt.o 00:01:56.811 CC module/blobfs/bdev/blobfs_bdev.o 00:01:56.811 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:56.811 CC module/bdev/raid/raid1.o 00:01:56.811 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:56.811 CC module/bdev/aio/bdev_aio.o 00:01:56.811 CC module/bdev/aio/bdev_aio_rpc.o 00:01:56.811 CC module/bdev/iscsi/bdev_iscsi.o 00:01:56.811 CC module/bdev/ftl/bdev_ftl.o 00:01:56.811 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:56.811 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:57.070 LIB libspdk_blobfs_bdev.a 00:01:57.070 LIB libspdk_bdev_split.a 00:01:57.070 LIB libspdk_bdev_error.a 00:01:57.070 LIB libspdk_bdev_gpt.a 00:01:57.070 LIB libspdk_bdev_null.a 00:01:57.070 LIB libspdk_bdev_passthru.a 00:01:57.070 LIB libspdk_bdev_zone_block.a 00:01:57.070 LIB libspdk_bdev_ftl.a 00:01:57.070 LIB libspdk_bdev_aio.a 00:01:57.070 LIB libspdk_bdev_delay.a 00:01:57.070 LIB libspdk_bdev_iscsi.a 00:01:57.070 LIB libspdk_bdev_malloc.a 00:01:57.070 LIB libspdk_bdev_lvol.a 00:01:57.070 LIB libspdk_bdev_virtio.a 00:01:57.328 LIB libspdk_bdev_raid.a 00:01:58.266 LIB libspdk_bdev_nvme.a 00:01:58.525 CC module/event/subsystems/sock/sock.o 00:01:58.525 CC module/event/subsystems/scheduler/scheduler.o 00:01:58.784 CC 
module/event/subsystems/vmd/vmd.o 00:01:58.784 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:58.784 CC module/event/subsystems/keyring/keyring.o 00:01:58.784 CC module/event/subsystems/iobuf/iobuf.o 00:01:58.784 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:58.784 CC module/event/subsystems/fsdev/fsdev.o 00:01:58.784 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:58.784 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:58.784 LIB libspdk_event_sock.a 00:01:58.784 LIB libspdk_event_keyring.a 00:01:58.784 LIB libspdk_event_scheduler.a 00:01:58.784 LIB libspdk_event_vmd.a 00:01:58.784 LIB libspdk_event_fsdev.a 00:01:58.784 LIB libspdk_event_vfu_tgt.a 00:01:58.784 LIB libspdk_event_vhost_blk.a 00:01:58.784 LIB libspdk_event_iobuf.a 00:01:59.043 CC module/event/subsystems/accel/accel.o 00:01:59.302 LIB libspdk_event_accel.a 00:01:59.561 CC module/event/subsystems/bdev/bdev.o 00:01:59.561 LIB libspdk_event_bdev.a 00:01:59.820 CC module/event/subsystems/nbd/nbd.o 00:01:59.820 CC module/event/subsystems/ublk/ublk.o 00:01:59.820 CC module/event/subsystems/scsi/scsi.o 00:01:59.820 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:59.820 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:00.079 LIB libspdk_event_nbd.a 00:02:00.079 LIB libspdk_event_ublk.a 00:02:00.079 LIB libspdk_event_scsi.a 00:02:00.079 LIB libspdk_event_nvmf.a 00:02:00.337 CC module/event/subsystems/iscsi/iscsi.o 00:02:00.338 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:00.597 LIB libspdk_event_vhost_scsi.a 00:02:00.597 LIB libspdk_event_iscsi.a 00:02:00.858 CC app/trace_record/trace_record.o 00:02:00.858 CC app/spdk_lspci/spdk_lspci.o 00:02:00.858 CXX app/trace/trace.o 00:02:00.858 CC app/spdk_nvme_identify/identify.o 00:02:00.858 CC app/spdk_top/spdk_top.o 00:02:00.858 TEST_HEADER include/spdk/assert.h 00:02:00.858 TEST_HEADER include/spdk/accel.h 00:02:00.858 TEST_HEADER include/spdk/barrier.h 00:02:00.858 TEST_HEADER include/spdk/accel_module.h 00:02:00.858 TEST_HEADER include/spdk/base64.h 00:02:00.858 TEST_HEADER include/spdk/bdev_module.h 00:02:00.858 TEST_HEADER include/spdk/bdev.h 00:02:00.858 TEST_HEADER include/spdk/bdev_zone.h 00:02:00.858 TEST_HEADER include/spdk/blob_bdev.h 00:02:00.858 TEST_HEADER include/spdk/bit_array.h 00:02:00.858 TEST_HEADER include/spdk/bit_pool.h 00:02:00.858 TEST_HEADER include/spdk/conf.h 00:02:00.858 TEST_HEADER include/spdk/blobfs.h 00:02:00.858 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:00.858 TEST_HEADER include/spdk/blob.h 00:02:00.858 CC app/spdk_nvme_discover/discovery_aer.o 00:02:00.858 TEST_HEADER include/spdk/config.h 00:02:00.858 TEST_HEADER include/spdk/cpuset.h 00:02:00.858 CC app/spdk_nvme_perf/perf.o 00:02:00.858 TEST_HEADER include/spdk/crc16.h 00:02:00.858 TEST_HEADER include/spdk/crc32.h 00:02:00.858 TEST_HEADER include/spdk/dif.h 00:02:00.858 TEST_HEADER include/spdk/crc64.h 00:02:00.858 TEST_HEADER include/spdk/endian.h 00:02:00.858 TEST_HEADER include/spdk/dma.h 00:02:00.858 CC app/spdk_dd/spdk_dd.o 00:02:00.859 TEST_HEADER include/spdk/env.h 00:02:00.859 TEST_HEADER include/spdk/event.h 00:02:00.859 TEST_HEADER include/spdk/env_dpdk.h 00:02:00.859 TEST_HEADER include/spdk/fd_group.h 00:02:00.859 CC test/rpc_client/rpc_client_test.o 00:02:00.859 TEST_HEADER include/spdk/fd.h 00:02:00.859 TEST_HEADER include/spdk/file.h 00:02:00.859 TEST_HEADER include/spdk/fsdev_module.h 00:02:00.859 TEST_HEADER include/spdk/fsdev.h 00:02:00.859 TEST_HEADER include/spdk/ftl.h 00:02:00.859 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:00.859 TEST_HEADER 
include/spdk/gpt_spec.h 00:02:00.859 TEST_HEADER include/spdk/hexlify.h 00:02:00.859 TEST_HEADER include/spdk/histogram_data.h 00:02:00.859 TEST_HEADER include/spdk/idxd_spec.h 00:02:00.859 TEST_HEADER include/spdk/idxd.h 00:02:00.859 TEST_HEADER include/spdk/init.h 00:02:00.859 TEST_HEADER include/spdk/ioat.h 00:02:00.859 TEST_HEADER include/spdk/json.h 00:02:00.859 TEST_HEADER include/spdk/ioat_spec.h 00:02:00.859 TEST_HEADER include/spdk/iscsi_spec.h 00:02:00.859 TEST_HEADER include/spdk/keyring.h 00:02:00.859 TEST_HEADER include/spdk/jsonrpc.h 00:02:00.859 TEST_HEADER include/spdk/keyring_module.h 00:02:00.859 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:00.859 TEST_HEADER include/spdk/likely.h 00:02:00.859 TEST_HEADER include/spdk/md5.h 00:02:00.859 TEST_HEADER include/spdk/lvol.h 00:02:00.859 CC app/iscsi_tgt/iscsi_tgt.o 00:02:00.859 TEST_HEADER include/spdk/log.h 00:02:00.859 TEST_HEADER include/spdk/memory.h 00:02:00.859 TEST_HEADER include/spdk/mmio.h 00:02:00.859 TEST_HEADER include/spdk/nbd.h 00:02:00.859 CC app/nvmf_tgt/nvmf_main.o 00:02:00.859 TEST_HEADER include/spdk/notify.h 00:02:00.859 TEST_HEADER include/spdk/nvme.h 00:02:00.859 TEST_HEADER include/spdk/nvme_intel.h 00:02:00.859 TEST_HEADER include/spdk/net.h 00:02:00.859 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:00.859 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:00.859 TEST_HEADER include/spdk/nvme_spec.h 00:02:00.859 TEST_HEADER include/spdk/nvmf_spec.h 00:02:00.859 TEST_HEADER include/spdk/nvmf.h 00:02:00.859 TEST_HEADER include/spdk/nvme_zns.h 00:02:00.859 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:00.859 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:00.859 TEST_HEADER include/spdk/nvmf_transport.h 00:02:00.859 TEST_HEADER include/spdk/opal_spec.h 00:02:00.859 TEST_HEADER include/spdk/pci_ids.h 00:02:00.859 TEST_HEADER include/spdk/pipe.h 00:02:00.859 TEST_HEADER include/spdk/queue.h 00:02:00.859 TEST_HEADER include/spdk/opal.h 00:02:00.859 TEST_HEADER include/spdk/scheduler.h 00:02:00.859 TEST_HEADER include/spdk/scsi.h 00:02:00.859 TEST_HEADER include/spdk/reduce.h 00:02:00.859 TEST_HEADER include/spdk/scsi_spec.h 00:02:00.859 TEST_HEADER include/spdk/sock.h 00:02:00.859 TEST_HEADER include/spdk/rpc.h 00:02:00.859 TEST_HEADER include/spdk/stdinc.h 00:02:00.859 TEST_HEADER include/spdk/thread.h 00:02:00.859 TEST_HEADER include/spdk/trace.h 00:02:00.859 TEST_HEADER include/spdk/string.h 00:02:00.859 TEST_HEADER include/spdk/trace_parser.h 00:02:00.859 TEST_HEADER include/spdk/tree.h 00:02:00.859 TEST_HEADER include/spdk/ublk.h 00:02:00.859 TEST_HEADER include/spdk/util.h 00:02:00.859 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:00.859 TEST_HEADER include/spdk/uuid.h 00:02:00.859 TEST_HEADER include/spdk/version.h 00:02:00.859 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:00.859 TEST_HEADER include/spdk/vhost.h 00:02:00.859 TEST_HEADER include/spdk/xor.h 00:02:00.859 TEST_HEADER include/spdk/vmd.h 00:02:00.859 TEST_HEADER include/spdk/zipf.h 00:02:00.859 CXX test/cpp_headers/accel_module.o 00:02:00.859 CXX test/cpp_headers/assert.o 00:02:00.859 CXX test/cpp_headers/accel.o 00:02:00.859 CXX test/cpp_headers/base64.o 00:02:00.859 CXX test/cpp_headers/bdev_module.o 00:02:00.859 CXX test/cpp_headers/barrier.o 00:02:00.859 CXX test/cpp_headers/bdev.o 00:02:00.859 CXX test/cpp_headers/bdev_zone.o 00:02:00.859 CXX test/cpp_headers/bit_array.o 00:02:00.859 CC app/spdk_tgt/spdk_tgt.o 00:02:00.859 CXX test/cpp_headers/bit_pool.o 00:02:00.859 CXX test/cpp_headers/blob_bdev.o 00:02:00.859 CXX test/cpp_headers/blobfs.o 
00:02:00.859 CXX test/cpp_headers/blobfs_bdev.o 00:02:00.859 CXX test/cpp_headers/config.o 00:02:00.859 CXX test/cpp_headers/blob.o 00:02:00.859 CXX test/cpp_headers/cpuset.o 00:02:00.859 CXX test/cpp_headers/crc64.o 00:02:00.859 CXX test/cpp_headers/conf.o 00:02:00.859 CXX test/cpp_headers/crc16.o 00:02:00.859 CXX test/cpp_headers/dif.o 00:02:00.859 CXX test/cpp_headers/crc32.o 00:02:00.859 CXX test/cpp_headers/dma.o 00:02:00.859 CXX test/cpp_headers/env.o 00:02:00.859 CXX test/cpp_headers/endian.o 00:02:00.859 CXX test/cpp_headers/env_dpdk.o 00:02:00.859 CXX test/cpp_headers/fd.o 00:02:00.859 CXX test/cpp_headers/fsdev.o 00:02:00.859 CXX test/cpp_headers/file.o 00:02:00.859 CXX test/cpp_headers/event.o 00:02:00.859 CXX test/cpp_headers/fd_group.o 00:02:00.859 CXX test/cpp_headers/fsdev_module.o 00:02:00.859 CXX test/cpp_headers/gpt_spec.o 00:02:00.859 CXX test/cpp_headers/ftl.o 00:02:00.859 CXX test/cpp_headers/fuse_dispatcher.o 00:02:00.859 CXX test/cpp_headers/hexlify.o 00:02:00.859 CXX test/cpp_headers/histogram_data.o 00:02:00.859 CXX test/cpp_headers/idxd_spec.o 00:02:00.859 CXX test/cpp_headers/idxd.o 00:02:00.859 CXX test/cpp_headers/ioat.o 00:02:00.859 CXX test/cpp_headers/init.o 00:02:00.859 CC examples/ioat/perf/perf.o 00:02:00.859 CXX test/cpp_headers/ioat_spec.o 00:02:00.859 CXX test/cpp_headers/iscsi_spec.o 00:02:00.859 CXX test/cpp_headers/jsonrpc.o 00:02:00.859 CXX test/cpp_headers/json.o 00:02:00.859 CXX test/cpp_headers/keyring.o 00:02:00.859 CXX test/cpp_headers/likely.o 00:02:00.859 CXX test/cpp_headers/keyring_module.o 00:02:00.859 CXX test/cpp_headers/log.o 00:02:00.859 CXX test/cpp_headers/lvol.o 00:02:00.859 CXX test/cpp_headers/md5.o 00:02:00.859 CXX test/cpp_headers/memory.o 00:02:00.859 CXX test/cpp_headers/mmio.o 00:02:00.859 CXX test/cpp_headers/nbd.o 00:02:00.859 CXX test/cpp_headers/net.o 00:02:00.859 CXX test/cpp_headers/notify.o 00:02:00.859 CC test/thread/poller_perf/poller_perf.o 00:02:00.859 CXX test/cpp_headers/nvme_intel.o 00:02:00.859 LINK spdk_lspci 00:02:00.859 CXX test/cpp_headers/nvme.o 00:02:00.859 CXX test/cpp_headers/nvme_ocssd.o 00:02:00.859 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:00.859 CXX test/cpp_headers/nvme_spec.o 00:02:00.859 CXX test/cpp_headers/nvmf_cmd.o 00:02:00.859 CXX test/cpp_headers/nvme_zns.o 00:02:00.859 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:00.859 CXX test/cpp_headers/nvmf.o 00:02:00.859 CC examples/ioat/verify/verify.o 00:02:00.859 CXX test/cpp_headers/nvmf_spec.o 00:02:00.859 CXX test/cpp_headers/nvmf_transport.o 00:02:00.859 CXX test/cpp_headers/opal_spec.o 00:02:00.859 CXX test/cpp_headers/opal.o 00:02:00.859 CXX test/cpp_headers/pipe.o 00:02:00.859 CXX test/cpp_headers/pci_ids.o 00:02:00.859 CXX test/cpp_headers/queue.o 00:02:00.859 CC app/fio/nvme/fio_plugin.o 00:02:00.859 CXX test/cpp_headers/reduce.o 00:02:00.859 CXX test/cpp_headers/rpc.o 00:02:00.859 CC test/env/memory/memory_ut.o 00:02:00.859 CXX test/cpp_headers/scheduler.o 00:02:00.859 CXX test/cpp_headers/sock.o 00:02:00.859 CXX test/cpp_headers/scsi_spec.o 00:02:00.859 CXX test/cpp_headers/scsi.o 00:02:00.859 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:00.859 CXX test/cpp_headers/stdinc.o 00:02:00.859 CC examples/util/zipf/zipf.o 00:02:00.859 CC test/app/stub/stub.o 00:02:00.859 CC test/thread/lock/spdk_lock.o 00:02:00.859 CC test/env/vtophys/vtophys.o 00:02:00.859 CC test/app/histogram_perf/histogram_perf.o 00:02:00.859 CC test/env/pci/pci_ut.o 00:02:00.859 CC test/app/jsoncat/jsoncat.o 00:02:00.859 CXX test/cpp_headers/string.o 
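The TEST_HEADER / "CXX test/cpp_headers/<name>.o" pairs above appear to compile each public spdk header as its own C++ translation unit, checking that every header is self-contained. A rough sketch of that kind of check follows; it is not SPDK's actual test makefile rule, just the shape of the idea, with placeholder compiler flags:

    # Hedged sketch of a standalone-header compile check like the TEST_HEADER /
    # "CXX test/cpp_headers/<name>.o" pairs above.
    for hdr in include/spdk/*.h; do
        name=$(basename "$hdr" .h)
        printf '#include <spdk/%s.h>\n' "$name" > "/tmp/hdr_${name}.cpp"
        c++ -std=c++11 -Iinclude -c "/tmp/hdr_${name}.cpp" -o "/tmp/hdr_${name}.o"
    done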
00:02:00.859 CXX test/cpp_headers/thread.o 00:02:01.121 CC test/app/bdev_svc/bdev_svc.o 00:02:01.121 LINK rpc_client_test 00:02:01.121 CC test/dma/test_dma/test_dma.o 00:02:01.121 LINK spdk_nvme_discover 00:02:01.121 CC app/fio/bdev/fio_plugin.o 00:02:01.121 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:01.121 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:01.121 LINK spdk_trace_record 00:02:01.121 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:01.121 CC test/env/mem_callbacks/mem_callbacks.o 00:02:01.121 LINK interrupt_tgt 00:02:01.121 CXX test/cpp_headers/trace.o 00:02:01.121 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:01.121 LINK nvmf_tgt 00:02:01.121 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:01.121 LINK iscsi_tgt 00:02:01.121 CXX test/cpp_headers/trace_parser.o 00:02:01.121 CXX test/cpp_headers/tree.o 00:02:01.121 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:01.121 CXX test/cpp_headers/ublk.o 00:02:01.121 CXX test/cpp_headers/util.o 00:02:01.121 LINK poller_perf 00:02:01.121 CXX test/cpp_headers/uuid.o 00:02:01.121 CXX test/cpp_headers/version.o 00:02:01.121 CXX test/cpp_headers/vfio_user_pci.o 00:02:01.121 CXX test/cpp_headers/vfio_user_spec.o 00:02:01.121 CXX test/cpp_headers/vhost.o 00:02:01.121 CXX test/cpp_headers/vmd.o 00:02:01.121 CXX test/cpp_headers/xor.o 00:02:01.121 CXX test/cpp_headers/zipf.o 00:02:01.121 LINK vtophys 00:02:01.121 LINK zipf 00:02:01.121 LINK histogram_perf 00:02:01.121 LINK jsoncat 00:02:01.121 LINK env_dpdk_post_init 00:02:01.122 LINK stub 00:02:01.122 LINK spdk_tgt 00:02:01.122 LINK ioat_perf 00:02:01.122 LINK verify 00:02:01.122 LINK bdev_svc 00:02:01.380 LINK spdk_trace 00:02:01.380 LINK spdk_dd 00:02:01.380 LINK pci_ut 00:02:01.380 LINK nvme_fuzz 00:02:01.380 LINK llvm_vfio_fuzz 00:02:01.380 LINK spdk_nvme_identify 00:02:01.380 LINK test_dma 00:02:01.380 LINK vhost_fuzz 00:02:01.380 LINK spdk_bdev 00:02:01.380 LINK spdk_nvme 00:02:01.380 LINK spdk_nvme_perf 00:02:01.638 LINK llvm_nvme_fuzz 00:02:01.638 LINK mem_callbacks 00:02:01.638 LINK spdk_top 00:02:01.638 CC examples/idxd/perf/perf.o 00:02:01.638 CC examples/vmd/lsvmd/lsvmd.o 00:02:01.638 CC examples/sock/hello_world/hello_sock.o 00:02:01.638 CC examples/vmd/led/led.o 00:02:01.638 CC app/vhost/vhost.o 00:02:01.896 CC examples/thread/thread/thread_ex.o 00:02:01.896 LINK lsvmd 00:02:01.896 LINK memory_ut 00:02:01.896 LINK led 00:02:01.896 LINK hello_sock 00:02:01.896 LINK vhost 00:02:01.896 LINK idxd_perf 00:02:01.896 LINK spdk_lock 00:02:01.896 LINK thread 00:02:02.154 LINK iscsi_fuzz 00:02:02.720 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:02.720 CC examples/nvme/hello_world/hello_world.o 00:02:02.720 CC examples/nvme/reconnect/reconnect.o 00:02:02.720 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:02.720 CC examples/nvme/hotplug/hotplug.o 00:02:02.720 CC examples/nvme/arbitration/arbitration.o 00:02:02.720 CC examples/nvme/abort/abort.o 00:02:02.720 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:02.720 CC test/event/reactor/reactor.o 00:02:02.720 CC test/event/event_perf/event_perf.o 00:02:02.720 CC test/event/reactor_perf/reactor_perf.o 00:02:02.720 CC test/event/app_repeat/app_repeat.o 00:02:02.720 CC test/event/scheduler/scheduler.o 00:02:02.720 LINK reactor 00:02:02.720 LINK pmr_persistence 00:02:02.720 LINK event_perf 00:02:02.720 LINK reactor_perf 00:02:02.720 LINK cmb_copy 00:02:02.720 LINK hello_world 00:02:02.720 LINK hotplug 00:02:02.720 LINK app_repeat 00:02:02.720 LINK reconnect 00:02:02.720 LINK arbitration 00:02:02.720 LINK abort 00:02:02.720 
LINK scheduler 00:02:02.720 LINK nvme_manage 00:02:02.978 CC test/nvme/reset/reset.o 00:02:02.978 CC test/nvme/sgl/sgl.o 00:02:02.978 CC test/nvme/simple_copy/simple_copy.o 00:02:02.978 CC test/blobfs/mkfs/mkfs.o 00:02:02.978 CC test/nvme/fused_ordering/fused_ordering.o 00:02:02.978 CC test/nvme/err_injection/err_injection.o 00:02:02.978 CC test/nvme/fdp/fdp.o 00:02:02.978 CC test/nvme/compliance/nvme_compliance.o 00:02:02.978 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:02.978 CC test/nvme/startup/startup.o 00:02:02.978 CC test/nvme/e2edp/nvme_dp.o 00:02:02.978 CC test/nvme/overhead/overhead.o 00:02:02.978 CC test/nvme/boot_partition/boot_partition.o 00:02:02.978 CC test/nvme/cuse/cuse.o 00:02:02.978 CC test/nvme/aer/aer.o 00:02:02.978 CC test/nvme/connect_stress/connect_stress.o 00:02:02.978 CC test/nvme/reserve/reserve.o 00:02:02.978 CC test/accel/dif/dif.o 00:02:03.239 CC test/lvol/esnap/esnap.o 00:02:03.239 LINK boot_partition 00:02:03.239 LINK err_injection 00:02:03.239 LINK fused_ordering 00:02:03.239 LINK startup 00:02:03.239 LINK doorbell_aers 00:02:03.239 LINK connect_stress 00:02:03.239 LINK simple_copy 00:02:03.239 LINK reset 00:02:03.239 LINK mkfs 00:02:03.239 LINK sgl 00:02:03.239 LINK reserve 00:02:03.239 LINK aer 00:02:03.239 LINK fdp 00:02:03.239 LINK nvme_dp 00:02:03.239 LINK overhead 00:02:03.239 LINK nvme_compliance 00:02:03.499 LINK dif 00:02:03.759 CC examples/accel/perf/accel_perf.o 00:02:03.759 CC examples/blob/hello_world/hello_blob.o 00:02:03.759 CC examples/blob/cli/blobcli.o 00:02:03.759 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:03.759 LINK hello_blob 00:02:03.759 LINK cuse 00:02:04.018 LINK hello_fsdev 00:02:04.018 LINK accel_perf 00:02:04.018 LINK blobcli 00:02:04.588 CC examples/bdev/hello_world/hello_bdev.o 00:02:04.588 CC examples/bdev/bdevperf/bdevperf.o 00:02:04.847 LINK hello_bdev 00:02:05.107 CC test/bdev/bdevio/bdevio.o 00:02:05.107 LINK bdevperf 00:02:05.366 LINK bdevio 00:02:06.302 LINK esnap 00:02:06.562 CC examples/nvmf/nvmf/nvmf.o 00:02:06.821 LINK nvmf 00:02:08.201 00:02:08.201 real 0m44.895s 00:02:08.201 user 6m16.045s 00:02:08.201 sys 2m27.899s 00:02:08.201 13:10:15 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:08.201 13:10:15 make -- common/autotest_common.sh@10 -- $ set +x 00:02:08.201 ************************************ 00:02:08.201 END TEST make 00:02:08.201 ************************************ 00:02:08.201 13:10:15 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:08.201 13:10:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:08.201 13:10:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:08.201 13:10:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.201 13:10:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:08.201 13:10:15 -- pm/common@44 -- $ pid=3714212 00:02:08.201 13:10:15 -- pm/common@50 -- $ kill -TERM 3714212 00:02:08.201 13:10:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.201 13:10:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:08.201 13:10:15 -- pm/common@44 -- $ pid=3714214 00:02:08.201 13:10:15 -- pm/common@50 -- $ kill -TERM 3714214 00:02:08.201 13:10:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.201 13:10:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 
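The xtrace around this point (autobuild.sh's stop_monitor_resources, the collect-*.pid existence checks in pm/common, and the kill -TERM calls) is the resource-monitor teardown that runs after "END TEST make". A condensed sketch of that pattern is below; the output directory is treated as a placeholder and the sudo used for the BMC monitor is omitted:

    # Condensed sketch of the monitor teardown traced around this point: each
    # collect-* monitor is signalled via the pid recorded in its pidfile.
    output=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
    for pidfile in "$output"/power/collect-*.pid; do
        [[ -e $pidfile ]] || continue
        kill -TERM "$(cat "$pidfile")" || true
    done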
00:02:08.201 13:10:15 -- pm/common@44 -- $ pid=3714216 00:02:08.201 13:10:15 -- pm/common@50 -- $ kill -TERM 3714216 00:02:08.201 13:10:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.201 13:10:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:08.201 13:10:15 -- pm/common@44 -- $ pid=3714244 00:02:08.201 13:10:15 -- pm/common@50 -- $ sudo -E kill -TERM 3714244 00:02:08.201 13:10:16 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:08.201 13:10:16 -- common/autotest_common.sh@1691 -- # lcov --version 00:02:08.201 13:10:16 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:08.201 13:10:16 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:08.201 13:10:16 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:08.201 13:10:16 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:08.201 13:10:16 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:08.201 13:10:16 -- scripts/common.sh@336 -- # IFS=.-: 00:02:08.201 13:10:16 -- scripts/common.sh@336 -- # read -ra ver1 00:02:08.201 13:10:16 -- scripts/common.sh@337 -- # IFS=.-: 00:02:08.201 13:10:16 -- scripts/common.sh@337 -- # read -ra ver2 00:02:08.201 13:10:16 -- scripts/common.sh@338 -- # local 'op=<' 00:02:08.201 13:10:16 -- scripts/common.sh@340 -- # ver1_l=2 00:02:08.201 13:10:16 -- scripts/common.sh@341 -- # ver2_l=1 00:02:08.201 13:10:16 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:08.201 13:10:16 -- scripts/common.sh@344 -- # case "$op" in 00:02:08.201 13:10:16 -- scripts/common.sh@345 -- # : 1 00:02:08.201 13:10:16 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:08.201 13:10:16 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:08.201 13:10:16 -- scripts/common.sh@365 -- # decimal 1 00:02:08.201 13:10:16 -- scripts/common.sh@353 -- # local d=1 00:02:08.201 13:10:16 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:08.201 13:10:16 -- scripts/common.sh@355 -- # echo 1 00:02:08.201 13:10:16 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:08.201 13:10:16 -- scripts/common.sh@366 -- # decimal 2 00:02:08.201 13:10:16 -- scripts/common.sh@353 -- # local d=2 00:02:08.201 13:10:16 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:08.201 13:10:16 -- scripts/common.sh@355 -- # echo 2 00:02:08.201 13:10:16 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:08.201 13:10:16 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:08.201 13:10:16 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:08.201 13:10:16 -- scripts/common.sh@368 -- # return 0 00:02:08.201 13:10:16 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:08.201 13:10:16 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:08.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:08.201 --rc genhtml_branch_coverage=1 00:02:08.201 --rc genhtml_function_coverage=1 00:02:08.201 --rc genhtml_legend=1 00:02:08.201 --rc geninfo_all_blocks=1 00:02:08.201 --rc geninfo_unexecuted_blocks=1 00:02:08.201 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:08.201 ' 00:02:08.201 13:10:16 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:08.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:08.201 --rc genhtml_branch_coverage=1 00:02:08.201 --rc genhtml_function_coverage=1 00:02:08.201 --rc genhtml_legend=1 00:02:08.201 --rc geninfo_all_blocks=1 
00:02:08.201 --rc geninfo_unexecuted_blocks=1 00:02:08.201 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:08.201 ' 00:02:08.201 13:10:16 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:08.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:08.201 --rc genhtml_branch_coverage=1 00:02:08.201 --rc genhtml_function_coverage=1 00:02:08.201 --rc genhtml_legend=1 00:02:08.201 --rc geninfo_all_blocks=1 00:02:08.201 --rc geninfo_unexecuted_blocks=1 00:02:08.201 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:08.201 ' 00:02:08.201 13:10:16 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:08.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:08.201 --rc genhtml_branch_coverage=1 00:02:08.201 --rc genhtml_function_coverage=1 00:02:08.201 --rc genhtml_legend=1 00:02:08.201 --rc geninfo_all_blocks=1 00:02:08.201 --rc geninfo_unexecuted_blocks=1 00:02:08.201 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:08.201 ' 00:02:08.201 13:10:16 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:08.201 13:10:16 -- nvmf/common.sh@7 -- # uname -s 00:02:08.201 13:10:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:08.201 13:10:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:08.201 13:10:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:08.201 13:10:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:08.201 13:10:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:08.201 13:10:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:08.202 13:10:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:08.202 13:10:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:08.202 13:10:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:08.202 13:10:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:08.202 13:10:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:02:08.202 13:10:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:02:08.202 13:10:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:08.202 13:10:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:08.202 13:10:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:08.202 13:10:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:08.202 13:10:16 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:08.202 13:10:16 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:08.202 13:10:16 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:08.202 13:10:16 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:08.202 13:10:16 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:08.202 13:10:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.202 13:10:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.202 13:10:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.202 13:10:16 -- paths/export.sh@5 -- # export PATH 00:02:08.202 13:10:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.202 13:10:16 -- nvmf/common.sh@51 -- # : 0 00:02:08.202 13:10:16 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:08.202 13:10:16 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:08.202 13:10:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:08.202 13:10:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:08.202 13:10:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:08.202 13:10:16 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:08.202 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:08.202 13:10:16 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:08.202 13:10:16 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:08.202 13:10:16 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:08.202 13:10:16 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:08.202 13:10:16 -- spdk/autotest.sh@32 -- # uname -s 00:02:08.202 13:10:16 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:08.202 13:10:16 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:08.202 13:10:16 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:08.202 13:10:16 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:08.202 13:10:16 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:08.202 13:10:16 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:08.202 13:10:16 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:08.202 13:10:16 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:08.202 13:10:16 -- spdk/autotest.sh@48 -- # udevadm_pid=3777816 00:02:08.202 13:10:16 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:08.202 13:10:16 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:08.202 13:10:16 -- pm/common@17 -- # local monitor 00:02:08.202 13:10:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.202 13:10:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.202 13:10:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.202 13:10:16 -- pm/common@21 -- # date +%s 00:02:08.202 13:10:16 -- pm/common@21 -- # date +%s 00:02:08.202 13:10:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.202 13:10:16 -- pm/common@25 -- # sleep 1 00:02:08.202 13:10:16 -- pm/common@21 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1729163416 00:02:08.202 13:10:16 -- pm/common@21 -- # date +%s 00:02:08.202 13:10:16 -- pm/common@21 -- # date +%s 00:02:08.202 13:10:16 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1729163416 00:02:08.202 13:10:16 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1729163416 00:02:08.202 13:10:16 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1729163416 00:02:08.462 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1729163416_collect-vmstat.pm.log 00:02:08.462 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1729163416_collect-cpu-load.pm.log 00:02:08.462 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1729163416_collect-cpu-temp.pm.log 00:02:08.462 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1729163416_collect-bmc-pm.bmc.pm.log 00:02:09.410 13:10:17 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:09.410 13:10:17 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:09.410 13:10:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:09.410 13:10:17 -- common/autotest_common.sh@10 -- # set +x 00:02:09.410 13:10:17 -- spdk/autotest.sh@59 -- # create_test_list 00:02:09.410 13:10:17 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:09.410 13:10:17 -- common/autotest_common.sh@10 -- # set +x 00:02:09.410 13:10:17 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:09.410 13:10:17 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:09.410 13:10:17 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:09.410 13:10:17 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:09.410 13:10:17 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:09.410 13:10:17 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:09.410 13:10:17 -- common/autotest_common.sh@1455 -- # uname 00:02:09.410 13:10:17 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:09.410 13:10:17 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:09.410 13:10:17 -- common/autotest_common.sh@1475 -- # uname 00:02:09.410 13:10:17 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:09.410 13:10:17 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:09.410 13:10:17 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:02:09.410 lcov: LCOV version 1.15 00:02:09.410 13:10:17 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:02:17.539 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:18.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcno 00:02:26.227 13:10:32 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:26.227 13:10:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:26.227 13:10:32 -- common/autotest_common.sh@10 -- # set +x 00:02:26.227 13:10:32 -- spdk/autotest.sh@78 -- # rm -f 00:02:26.227 13:10:32 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:28.213 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:28.213 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:28.213 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:28.213 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:28.213 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:28.471 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:28.471 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:28.471 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:28.472 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:28.472 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:28.472 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:28.472 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:28.472 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:28.472 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:28.731 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:28.731 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:28.731 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:28.731 13:10:36 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:02:28.731 13:10:36 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:28.731 13:10:36 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:28.731 13:10:36 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:28.731 13:10:36 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:28.731 13:10:36 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:28.731 13:10:36 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:28.731 13:10:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:28.731 13:10:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:28.731 13:10:36 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:28.731 13:10:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:28.731 13:10:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:28.731 13:10:36 -- spdk/autotest.sh@100 -- # block_in_use 
/dev/nvme0n1 00:02:28.731 13:10:36 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:28.731 13:10:36 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:28.731 No valid GPT data, bailing 00:02:28.731 13:10:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:28.731 13:10:36 -- scripts/common.sh@394 -- # pt= 00:02:28.731 13:10:36 -- scripts/common.sh@395 -- # return 1 00:02:28.731 13:10:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:28.731 1+0 records in 00:02:28.731 1+0 records out 00:02:28.731 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00183693 s, 571 MB/s 00:02:28.731 13:10:36 -- spdk/autotest.sh@105 -- # sync 00:02:28.731 13:10:36 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:28.731 13:10:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:28.731 13:10:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:36.852 13:10:44 -- spdk/autotest.sh@111 -- # uname -s 00:02:36.852 13:10:44 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:02:36.852 13:10:44 -- spdk/autotest.sh@111 -- # [[ 1 -eq 1 ]] 00:02:36.852 13:10:44 -- spdk/autotest.sh@112 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:36.852 13:10:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:36.852 13:10:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:36.852 13:10:44 -- common/autotest_common.sh@10 -- # set +x 00:02:36.852 ************************************ 00:02:36.852 START TEST setup.sh 00:02:36.852 ************************************ 00:02:36.852 13:10:44 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:36.852 * Looking for test storage... 00:02:36.852 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:36.852 13:10:44 setup.sh -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:36.852 13:10:44 setup.sh -- common/autotest_common.sh@1691 -- # lcov --version 00:02:36.852 13:10:44 setup.sh -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:36.852 13:10:44 setup.sh -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@336 -- # IFS=.-: 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@336 -- # read -ra ver1 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@337 -- # IFS=.-: 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@337 -- # read -ra ver2 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@338 -- # local 'op=<' 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@340 -- # ver1_l=2 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@341 -- # ver2_l=1 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@344 -- # case "$op" in 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@345 -- # : 1 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@365 -- # decimal 1 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@353 -- # local d=1 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@355 -- # echo 1 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@365 -- # ver1[v]=1 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@366 -- # decimal 2 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@353 -- # local d=2 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@355 -- # echo 2 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@366 -- # ver2[v]=2 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:36.852 13:10:44 setup.sh -- scripts/common.sh@368 -- # return 0 00:02:36.852 13:10:44 setup.sh -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:36.852 13:10:44 setup.sh -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:36.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:36.852 --rc genhtml_branch_coverage=1 00:02:36.852 --rc genhtml_function_coverage=1 00:02:36.852 --rc genhtml_legend=1 00:02:36.852 --rc geninfo_all_blocks=1 00:02:36.852 --rc geninfo_unexecuted_blocks=1 00:02:36.852 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:36.852 ' 00:02:36.852 13:10:44 setup.sh -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:36.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:36.852 --rc genhtml_branch_coverage=1 00:02:36.852 --rc genhtml_function_coverage=1 00:02:36.852 --rc genhtml_legend=1 00:02:36.852 --rc geninfo_all_blocks=1 00:02:36.852 --rc geninfo_unexecuted_blocks=1 00:02:36.852 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:36.852 ' 00:02:36.852 13:10:44 setup.sh -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:36.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:36.852 --rc genhtml_branch_coverage=1 00:02:36.852 --rc genhtml_function_coverage=1 00:02:36.852 --rc genhtml_legend=1 00:02:36.852 --rc geninfo_all_blocks=1 00:02:36.852 --rc geninfo_unexecuted_blocks=1 00:02:36.852 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:36.852 ' 00:02:36.852 13:10:44 setup.sh -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:36.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:36.852 --rc genhtml_branch_coverage=1 00:02:36.853 --rc genhtml_function_coverage=1 00:02:36.853 --rc genhtml_legend=1 00:02:36.853 --rc geninfo_all_blocks=1 00:02:36.853 --rc geninfo_unexecuted_blocks=1 00:02:36.853 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:36.853 ' 00:02:36.853 13:10:44 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:36.853 13:10:44 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:36.853 13:10:44 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:36.853 13:10:44 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:36.853 13:10:44 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:36.853 
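The cmp_versions trace repeated throughout this log (lt 1.15 2, splitting each version on IFS=.-: and comparing component by component) is what decides whether the 1.x-style lcov branch/function options get exported. A minimal standalone sketch of that comparison, assuming purely numeric version components; ver_lt and the surrounding script are illustrative, not the exact scripts/common.sh code:

#!/usr/bin/env bash
# Sketch of the component-wise "is version A < version B" check traced above.
# Assumes numeric components; ver_lt is an illustrative name, not the real helper.
ver_lt() {
    local IFS=.-:                 # split on '.', '-' and ':' like the trace does
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local i a b
    for (( i = 0; i < max; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                          # equal versions are not "less than"
}

if ver_lt "$(lcov --version | awk '{print $NF}')" 2; then
    echo "lcov is older than 2.x, keeping the 1.x branch/function coverage options"
fi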
13:10:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:36.853 ************************************ 00:02:36.853 START TEST acl 00:02:36.853 ************************************ 00:02:36.853 13:10:44 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:36.853 * Looking for test storage... 00:02:36.853 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:36.853 13:10:44 setup.sh.acl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:36.853 13:10:44 setup.sh.acl -- common/autotest_common.sh@1691 -- # lcov --version 00:02:36.853 13:10:44 setup.sh.acl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:36.853 13:10:44 setup.sh.acl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@336 -- # IFS=.-: 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@336 -- # read -ra ver1 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@337 -- # IFS=.-: 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@337 -- # read -ra ver2 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@338 -- # local 'op=<' 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@340 -- # ver1_l=2 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@341 -- # ver2_l=1 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@344 -- # case "$op" in 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@345 -- # : 1 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@365 -- # decimal 1 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@353 -- # local d=1 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@355 -- # echo 1 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@365 -- # ver1[v]=1 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@366 -- # decimal 2 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@353 -- # local d=2 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@355 -- # echo 2 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@366 -- # ver2[v]=2 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:36.853 13:10:44 setup.sh.acl -- scripts/common.sh@368 -- # return 0 00:02:36.853 13:10:44 setup.sh.acl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:36.853 13:10:44 setup.sh.acl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:36.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:36.853 --rc genhtml_branch_coverage=1 00:02:36.853 --rc genhtml_function_coverage=1 00:02:36.853 --rc genhtml_legend=1 00:02:36.853 --rc geninfo_all_blocks=1 00:02:36.853 --rc geninfo_unexecuted_blocks=1 00:02:36.853 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:36.853 ' 00:02:36.853 13:10:44 setup.sh.acl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:36.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:36.853 --rc genhtml_branch_coverage=1 00:02:36.853 --rc genhtml_function_coverage=1 00:02:36.853 --rc genhtml_legend=1 00:02:36.853 --rc geninfo_all_blocks=1 00:02:36.853 --rc geninfo_unexecuted_blocks=1 00:02:36.853 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:36.853 ' 00:02:36.853 13:10:44 setup.sh.acl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:36.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:36.853 --rc genhtml_branch_coverage=1 00:02:36.853 --rc genhtml_function_coverage=1 00:02:36.853 --rc genhtml_legend=1 00:02:36.853 --rc geninfo_all_blocks=1 00:02:36.853 --rc geninfo_unexecuted_blocks=1 00:02:36.853 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:36.853 ' 00:02:36.853 13:10:44 setup.sh.acl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:36.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:36.853 --rc genhtml_branch_coverage=1 00:02:36.853 --rc genhtml_function_coverage=1 00:02:36.853 --rc genhtml_legend=1 00:02:36.853 --rc geninfo_all_blocks=1 00:02:36.853 --rc geninfo_unexecuted_blocks=1 00:02:36.853 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:36.853 ' 00:02:36.853 13:10:44 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:36.853 13:10:44 setup.sh.acl -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:36.853 13:10:44 setup.sh.acl -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:36.853 13:10:44 setup.sh.acl -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:36.853 13:10:44 setup.sh.acl -- 
common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:36.853 13:10:44 setup.sh.acl -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:36.853 13:10:44 setup.sh.acl -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:36.853 13:10:44 setup.sh.acl -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:36.853 13:10:44 setup.sh.acl -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:36.853 13:10:44 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:36.853 13:10:44 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:36.853 13:10:44 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:36.853 13:10:44 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:36.853 13:10:44 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:36.853 13:10:44 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:36.853 13:10:44 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:41.067 13:10:48 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:41.067 13:10:48 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:41.067 13:10:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.067 13:10:48 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:41.067 13:10:48 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:41.067 13:10:48 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:02:43.600 Hugepages 00:02:43.600 node hugesize free / total 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.600 00:02:43.600 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.600 13:10:51 setup.sh.acl -- 
setup/acl.sh@20 -- # continue 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 
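The loop traced here reads the "setup.sh status" table line by line (read -r _ dev _ _ _ driver _), keeps only rows whose second column looks like a PCI address and whose driver column is nvme, and skips the hugepage summary rows and the ioatdma DMA engines. A minimal sketch of the same filter, assuming the column layout shown above (Type BDF Vendor Device NUMA Driver ...); the setup.sh invocation path is illustrative:

#!/usr/bin/env bash
# Sketch of the controller scan traced above: collect PCI devices bound to nvme,
# skip hugepage summary rows and ioatdma engines.
devs=()
declare -A drivers
while read -r _ dev _ _ _ driver _; do
    [[ $dev == *:*:*.* ]] || continue    # not a BDF (hugepage rows, headers) -> skip
    [[ $driver == nvme ]] || continue    # ioatdma and other drivers -> skip
    devs+=("$dev")
    drivers[$dev]=$driver
done < <(./scripts/setup.sh status)      # illustrative invocation

echo "NVMe controllers found (${#devs[@]}): ${devs[*]:-none}"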
00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.600 13:10:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.859 13:10:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:43.859 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.859 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.859 13:10:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.859 13:10:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:02:43.859 13:10:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:43.859 13:10:51 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:02:43.859 13:10:51 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:43.859 13:10:51 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:43.859 13:10:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.859 13:10:51 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:43.859 13:10:51 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:43.859 13:10:51 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:43.859 13:10:51 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:43.859 13:10:51 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:43.859 ************************************ 00:02:43.859 START TEST denied 00:02:43.859 ************************************ 00:02:43.859 13:10:51 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:02:43.859 13:10:51 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:02:43.859 13:10:51 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:43.859 13:10:51 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:02:43.859 13:10:51 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:43.859 13:10:51 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:02:48.054 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:02:48.054 13:10:55 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:02:48.054 13:10:55 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:48.054 13:10:55 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:48.054 13:10:55 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:02:48.054 13:10:55 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:02:48.054 13:10:55 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:48.054 13:10:55 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:48.054 13:10:55 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:48.054 13:10:55 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:48.055 13:10:55 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:52.251 
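The "denied" test above blocks 0000:d8:00.0 via PCI_BLOCKED, runs setup output config, and then verifies the controller is still bound to the kernel nvme driver by resolving its driver symlink in sysfs (readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver). A minimal sketch of that verification step; the BDF and expected driver come from the log, the rest is illustrative:

#!/usr/bin/env bash
# Sketch of the sysfs driver check traced above for 0000:d8:00.0.
bdf=${1:-0000:d8:00.0}
expected=${2:-nvme}

link=/sys/bus/pci/devices/$bdf/driver
if [[ -e $link ]]; then
    driver=$(basename "$(readlink -f "$link")")
else
    driver="(unbound)"
fi

if [[ $driver == "$expected" ]]; then
    echo "$bdf is bound to $expected, as expected"
else
    echo "$bdf is bound to $driver, expected $expected" >&2
    exit 1
fi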
00:02:52.251 real 0m8.081s 00:02:52.251 user 0m2.605s 00:02:52.251 sys 0m4.805s 00:02:52.251 13:10:59 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:52.251 13:10:59 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:52.251 ************************************ 00:02:52.251 END TEST denied 00:02:52.251 ************************************ 00:02:52.251 13:10:59 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:52.251 13:10:59 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:52.251 13:10:59 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:52.251 13:10:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:52.251 ************************************ 00:02:52.251 START TEST allowed 00:02:52.251 ************************************ 00:02:52.251 13:10:59 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:02:52.251 13:10:59 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:02:52.251 13:10:59 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:52.251 13:10:59 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:02:52.251 13:10:59 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:52.251 13:10:59 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:02:57.530 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:02:57.530 13:11:04 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:57.530 13:11:04 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:57.530 13:11:04 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:57.530 13:11:04 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:57.531 13:11:04 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:00.829 00:03:00.829 real 0m8.319s 00:03:00.829 user 0m2.321s 00:03:00.829 sys 0m4.624s 00:03:00.829 13:11:08 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:00.829 13:11:08 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:00.829 ************************************ 00:03:00.829 END TEST allowed 00:03:00.829 ************************************ 00:03:00.829 00:03:00.829 real 0m24.020s 00:03:00.829 user 0m7.679s 00:03:00.829 sys 0m14.576s 00:03:00.829 13:11:08 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:00.829 13:11:08 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:00.829 ************************************ 00:03:00.829 END TEST acl 00:03:00.829 ************************************ 00:03:00.829 13:11:08 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:00.829 13:11:08 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:00.829 13:11:08 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:00.829 13:11:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:00.829 ************************************ 00:03:00.829 START TEST hugepages 00:03:00.829 ************************************ 00:03:00.829 13:11:08 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:00.829 * Looking for test storage... 
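Each sub-test in this log (denied, allowed, acl, hugepages, ...) is driven through the same run_test wrapper, which produces the START TEST / END TEST banners and the real/user/sys timing lines seen above. A rough sketch of that pattern, assuming a simplified implementation rather than the actual autotest_common.sh code; the invocation at the bottom is illustrative:

#!/usr/bin/env bash
# Rough sketch (assumed, simplified) of the run_test banner/timing pattern
# visible throughout this log; not the actual autotest_common.sh implementation.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

run_test denied ./test/setup/acl.sh denied   # illustrative invocation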
00:03:00.829 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:00.829 13:11:08 setup.sh.hugepages -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:00.829 13:11:08 setup.sh.hugepages -- common/autotest_common.sh@1691 -- # lcov --version 00:03:00.829 13:11:08 setup.sh.hugepages -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:00.829 13:11:08 setup.sh.hugepages -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@336 -- # IFS=.-: 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@336 -- # read -ra ver1 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@337 -- # IFS=.-: 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@337 -- # read -ra ver2 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@338 -- # local 'op=<' 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@340 -- # ver1_l=2 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@341 -- # ver2_l=1 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@344 -- # case "$op" in 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@345 -- # : 1 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@365 -- # decimal 1 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=1 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 1 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@365 -- # ver1[v]=1 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@366 -- # decimal 2 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=2 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 2 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@366 -- # ver2[v]=2 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:00.829 13:11:08 setup.sh.hugepages -- scripts/common.sh@368 -- # return 0 00:03:00.829 13:11:08 setup.sh.hugepages -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:00.829 13:11:08 setup.sh.hugepages -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:00.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.829 --rc genhtml_branch_coverage=1 00:03:00.829 --rc genhtml_function_coverage=1 00:03:00.829 --rc genhtml_legend=1 00:03:00.829 --rc geninfo_all_blocks=1 00:03:00.829 --rc geninfo_unexecuted_blocks=1 00:03:00.829 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:00.829 ' 00:03:00.829 13:11:08 
setup.sh.hugepages -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:00.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.829 --rc genhtml_branch_coverage=1 00:03:00.829 --rc genhtml_function_coverage=1 00:03:00.829 --rc genhtml_legend=1 00:03:00.829 --rc geninfo_all_blocks=1 00:03:00.829 --rc geninfo_unexecuted_blocks=1 00:03:00.829 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:00.829 ' 00:03:00.829 13:11:08 setup.sh.hugepages -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:00.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.829 --rc genhtml_branch_coverage=1 00:03:00.829 --rc genhtml_function_coverage=1 00:03:00.829 --rc genhtml_legend=1 00:03:00.829 --rc geninfo_all_blocks=1 00:03:00.829 --rc geninfo_unexecuted_blocks=1 00:03:00.829 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:00.829 ' 00:03:00.829 13:11:08 setup.sh.hugepages -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:00.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.829 --rc genhtml_branch_coverage=1 00:03:00.829 --rc genhtml_function_coverage=1 00:03:00.829 --rc genhtml_legend=1 00:03:00.829 --rc geninfo_all_blocks=1 00:03:00.829 --rc geninfo_unexecuted_blocks=1 00:03:00.829 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:00.829 ' 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283816 kB' 'MemFree: 38226440 kB' 'MemAvailable: 42747624 kB' 'Buffers: 13184 kB' 'Cached: 13130088 kB' 'SwapCached: 0 kB' 'Active: 9622772 kB' 'Inactive: 4106540 kB' 'Active(anon): 9113224 kB' 'Inactive(anon): 0 kB' 'Active(file): 509548 kB' 'Inactive(file): 4106540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 589540 kB' 'Mapped: 206604 kB' 
'Shmem: 8527184 kB' 'KReclaimable: 552520 kB' 'Slab: 1537836 kB' 'SReclaimable: 552520 kB' 'SUnreclaim: 985316 kB' 'KernelStack: 21936 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36433360 kB' 'Committed_AS: 10373352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217924 kB' 'VmallocChunk: 0 kB' 'Percpu: 101696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3265908 kB' 'DirectMap2M: 23683072 kB' 'DirectMap1G: 42991616 kB' 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.829 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 
13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.830 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.831 13:11:08 
setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 
0 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:03:00.831 13:11:08 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup 00:03:00.831 13:11:08 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:00.831 13:11:08 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:00.831 13:11:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:00.831 ************************************ 00:03:00.831 START TEST single_node_setup 00:03:00.831 ************************************ 00:03:00.831 13:11:08 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1125 -- # single_node_setup 00:03:00.831 13:11:08 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0 00:03:00.831 13:11:08 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@48 -- # local size=2097152 00:03:00.831 13:11:08 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:03:00.831 13:11:08 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift 00:03:00.831 13:11:08 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0') 00:03:00.831 13:11:08 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids 00:03:00.831 13:11:08 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:00.831 13:11:08 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:00.831 13:11:08 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:03:00.831 13:11:08 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:03:00.831 13:11:08 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes 00:03:00.831 13:11:08 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:00.831 13:11:08 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:00.831 13:11:08 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:00.831 13:11:08 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:00.831 13:11:08 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:03:00.831 13:11:08 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:03:00.831 13:11:08 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # 
nodes_test[_no_nodes]=1024
00:03:00.831 13:11:08 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0
00:03:00.831 13:11:08 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024
00:03:00.831 13:11:08 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0
00:03:00.831 13:11:08 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output
00:03:00.831 13:11:08 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:00.831 13:11:08 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:04.132 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:04.132 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:04.132 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:04.132 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:04.132 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:04.132 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:04.132 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:04.132 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:04.132 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:04.132 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:04.132 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:04.132 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:04.132 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:04.132 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:04.132 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:04.132 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:06.043 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # verify_nr_hugepages
00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node
00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t
00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s
00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp
00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv
00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon
00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.043 13:11:13
setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283816 kB' 'MemFree: 40422424 kB' 'MemAvailable: 44943544 kB' 'Buffers: 13184 kB' 'Cached: 13130224 kB' 'SwapCached: 0 kB' 'Active: 9623648 kB' 'Inactive: 4106540 kB' 'Active(anon): 9114100 kB' 'Inactive(anon): 0 kB' 'Active(file): 509548 kB' 'Inactive(file): 4106540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 590228 kB' 'Mapped: 206756 kB' 'Shmem: 8527320 kB' 'KReclaimable: 552456 kB' 'Slab: 1536164 kB' 'SReclaimable: 552456 kB' 'SUnreclaim: 983708 kB' 'KernelStack: 21968 kB' 'PageTables: 8680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10377624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218052 kB' 'VmallocChunk: 0 kB' 'Percpu: 101696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3265908 kB' 'DirectMap2M: 23683072 kB' 'DirectMap1G: 42991616 kB' 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.043 13:11:13 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.043 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 
13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:06.044 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283816 kB' 'MemFree: 40423340 kB' 'MemAvailable: 44944460 kB' 'Buffers: 13184 kB' 'Cached: 13130224 kB' 'SwapCached: 0 kB' 'Active: 9623196 kB' 'Inactive: 4106540 kB' 'Active(anon): 9113648 kB' 'Inactive(anon): 0 kB' 'Active(file): 509548 kB' 'Inactive(file): 4106540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 589752 kB' 'Mapped: 206756 kB' 'Shmem: 8527320 kB' 'KReclaimable: 552456 kB' 'Slab: 1536004 kB' 'SReclaimable: 552456 kB' 'SUnreclaim: 983548 kB' 'KernelStack: 21888 kB' 'PageTables: 8556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10376140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217988 kB' 'VmallocChunk: 0 kB' 'Percpu: 101696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3265908 kB' 'DirectMap2M: 23683072 kB' 'DirectMap1G: 42991616 kB' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 
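The long runs of "[[ Foo == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" / "IFS=': '" / "read -r var val _" entries above are the xtrace of get_meminfo in setup/common.sh walking /proc/meminfo one field at a time until it reaches the requested key (Hugepagesize earlier, AnonHugePages and HugePages_Surp here), echoing the value and returning. A minimal sketch of that loop, with illustrative names rather than the verbatim SPDK source:

shopt -s extglob

# Sketch: print the value of a /proc/meminfo field, optionally for one NUMA node.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local mem line var val _

    # Per-node counters live under /sys/devices/system/node/nodeN/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue    # the repeated [[ ... ]] / continue entries in the trace
        echo "$val"                         # e.g. 2048 for Hugepagesize, 0 for AnonHugePages
        return 0
    done
    return 1
}

Under that sketch, "get_meminfo_sketch Hugepagesize" would print 2048 on this host, and "get_meminfo_sketch HugePages_Free 0" would report the free 2048 kB pages on node 0.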
00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.045 
13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.045 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.046 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283816 kB' 'MemFree: 40421012 kB' 'MemAvailable: 44942132 kB' 'Buffers: 13184 kB' 'Cached: 13130240 kB' 'SwapCached: 0 kB' 'Active: 9623696 kB' 'Inactive: 4106540 kB' 'Active(anon): 9114148 kB' 'Inactive(anon): 0 kB' 'Active(file): 509548 kB' 'Inactive(file): 4106540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 590184 kB' 'Mapped: 206676 kB' 'Shmem: 8527336 kB' 'KReclaimable: 552456 kB' 'Slab: 1536032 kB' 'SReclaimable: 
552456 kB' 'SUnreclaim: 983576 kB' 'KernelStack: 21872 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10377664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218036 kB' 'VmallocChunk: 0 kB' 'Percpu: 101696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3265908 kB' 'DirectMap2M: 23683072 kB' 'DirectMap1G: 42991616 kB' 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.047 13:11:13 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.047 13:11:13 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.047 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.048 13:11:13 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.048 13:11:13 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:03:06.048 nr_hugepages=1024 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:06.048 resv_hugepages=0 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:06.048 surplus_hugepages=0 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:06.048 anon_hugepages=0 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.048 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283816 kB' 'MemFree: 40420384 kB' 'MemAvailable: 44941504 kB' 'Buffers: 13184 kB' 'Cached: 13130268 kB' 'SwapCached: 0 kB' 'Active: 9623396 kB' 'Inactive: 
4106540 kB' 'Active(anon): 9113848 kB' 'Inactive(anon): 0 kB' 'Active(file): 509548 kB' 'Inactive(file): 4106540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 590340 kB' 'Mapped: 206676 kB' 'Shmem: 8527364 kB' 'KReclaimable: 552456 kB' 'Slab: 1536032 kB' 'SReclaimable: 552456 kB' 'SUnreclaim: 983576 kB' 'KernelStack: 21984 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10377688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218052 kB' 'VmallocChunk: 0 kB' 'Percpu: 101696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3265908 kB' 'DirectMap2M: 23683072 kB' 'DirectMap1G: 42991616 kB' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.049 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.050 13:11:13 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 32585368 kB' 'MemFree: 17988672 kB' 'MemUsed: 14596696 kB' 'SwapCached: 0 kB' 'Active: 6828556 kB' 'Inactive: 3786844 kB' 'Active(anon): 6658128 kB' 'Inactive(anon): 0 kB' 'Active(file): 170428 kB' 'Inactive(file): 3786844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10403528 kB' 'Mapped: 106692 kB' 'AnonPages: 215112 kB' 'Shmem: 6446256 kB' 'KernelStack: 12616 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 285844 kB' 'Slab: 726948 kB' 'SReclaimable: 285844 kB' 'SUnreclaim: 441104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.050 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[setup/common.sh@31-32 xtrace repeats the same continue / IFS=': ' / read -r var val _ / [[ <field> == HugePages_Surp ]] pattern for the remaining /proc/meminfo fields, from Mapped through ShmemPmdMapped; none of them match]
00:03:06.051
13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.051 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.052 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.052 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.052 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.052 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.052 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.052 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.052 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.052 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.052 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:06.052 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.052 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.052 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.052 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:06.052 13:11:13 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:06.052 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:06.052 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:06.052 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:06.052 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:06.052 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:03:06.052 node0=1024 expecting 1024 00:03:06.052 13:11:13 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:03:06.052 00:03:06.052 real 0m5.191s 00:03:06.052 user 0m1.373s 00:03:06.052 sys 0m2.391s 00:03:06.052 13:11:13 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:06.052 
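The trace above is the tail of the HugePages_Surp lookup in single_node_setup: setup/common.sh walks /proc/meminfo field by field, skips everything that is not the requested key, echoes the value (0) once HugePages_Surp matches, and the test then reports node0=1024 expecting 1024 and finishes in roughly 5.2 seconds. A minimal sketch of that style of lookup, using a hypothetical helper name rather than the actual setup/common.sh implementation:

# Hypothetical sketch, not the SPDK setup/common.sh source: scan a meminfo
# file as "field: value" pairs, skip non-matching fields with continue, and
# print the value of the requested field, mirroring the xtrace pattern above.
get_meminfo_value() {
    local want=$1 file=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] || continue   # every other field produces only a "continue"
        echo "$val"
        return 0
    done < "$file"
    return 1
}

# e.g. get_meminfo_value HugePages_Surp prints 0 on this runner

Splitting on IFS=': ' is what turns a line such as HugePages_Surp: 0 into var=HugePages_Surp and val=0, so every non-matching field shows up in the log only as the bare continue seen above.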
13:11:13 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x 00:03:06.052 ************************************ 00:03:06.052 END TEST single_node_setup 00:03:06.052 ************************************ 00:03:06.052 13:11:13 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc 00:03:06.052 13:11:13 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:06.052 13:11:13 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:06.052 13:11:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:06.052 ************************************ 00:03:06.052 START TEST even_2G_alloc 00:03:06.052 ************************************ 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output 00:03:06.052 13:11:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.052 13:11:13 
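even_2G_alloc begins by turning the requested size into a hugepage count: 2097152 kB at the 2048 kB hugepage size reported in /proc/meminfo gives nr_hugepages=1024, and with two NUMA nodes and no user-supplied node list the pages are split evenly, 512 per node, before NRHUGE=1024 is passed on to setup.sh. A small illustration of that arithmetic; the variable names below are illustrative only, not the hugepages.sh source:

# Illustration of the even split computed in the trace above (names made up).
size_kb=2097152        # requested total, i.e. 2 GiB
hugepage_kb=2048       # hugepage size on this system
nr_nodes=2             # NUMA nodes present

nr_hugepages=$(( size_kb / hugepage_kb ))   # 1024 pages in total
per_node=$(( nr_hugepages / nr_nodes ))     # 512 pages per node

declare -a nodes_test
for (( node = nr_nodes - 1; node >= 0; node-- )); do
    nodes_test[node]=$per_node              # node1 first, then node0, as in the trace
done
echo "NRHUGE=$nr_hugepages, per node: ${nodes_test[*]}"

The countdown matches the trace, which fills nodes_test[1]=512 before nodes_test[0]=512.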
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:09.347 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:09.347 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:09.347 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:09.347 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:09.348 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:09.348 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:09.348 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:09.348 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:09.348 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:09.348 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:09.348 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:09.348 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:09.348 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:09.348 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:09.348 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:09.348 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:09.348 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283816 kB' 
'MemFree: 40418000 kB' 'MemAvailable: 44939120 kB' 'Buffers: 13184 kB' 'Cached: 13130388 kB' 'SwapCached: 0 kB' 'Active: 9622684 kB' 'Inactive: 4106540 kB' 'Active(anon): 9113136 kB' 'Inactive(anon): 0 kB' 'Active(file): 509548 kB' 'Inactive(file): 4106540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 588908 kB' 'Mapped: 205584 kB' 'Shmem: 8527484 kB' 'KReclaimable: 552456 kB' 'Slab: 1536676 kB' 'SReclaimable: 552456 kB' 'SUnreclaim: 984220 kB' 'KernelStack: 21888 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10364316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218084 kB' 'VmallocChunk: 0 kB' 'Percpu: 101696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3265908 kB' 'DirectMap2M: 23683072 kB' 'DirectMap1G: 42991616 kB' 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.348 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.348 13:11:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
[setup/common.sh@31-32 xtrace repeats the same continue / IFS=': ' / read -r var val _ / [[ <field> == AnonHugePages ]] pattern for the /proc/meminfo fields from Active through HardwareCorrupted; none of them match]
00:03:09.349 13:11:17 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.349 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.349 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.349 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:09.349 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:09.349 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.349 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:09.349 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.349 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.349 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.349 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.349 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.349 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.349 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.349 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.349 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.349 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283816 kB' 'MemFree: 40419284 kB' 'MemAvailable: 44940404 kB' 'Buffers: 13184 kB' 'Cached: 13130388 kB' 'SwapCached: 0 kB' 'Active: 9623132 kB' 'Inactive: 4106540 kB' 'Active(anon): 9113584 kB' 'Inactive(anon): 0 kB' 'Active(file): 509548 kB' 'Inactive(file): 4106540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 589396 kB' 'Mapped: 205584 kB' 'Shmem: 8527484 kB' 'KReclaimable: 552456 kB' 'Slab: 1536592 kB' 'SReclaimable: 552456 kB' 'SUnreclaim: 984136 kB' 'KernelStack: 21872 kB' 'PageTables: 8440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10364332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218068 kB' 'VmallocChunk: 0 kB' 'Percpu: 101696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3265908 kB' 'DirectMap2M: 23683072 kB' 'DirectMap1G: 42991616 kB' 00:03:09.349 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.349 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.349 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.349 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.350 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.350 13:11:17 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue
[setup/common.sh@31-32 xtrace repeats the same continue / IFS=': ' / read -r var val _ / [[ <field> == HugePages_Surp ]] pattern for the /proc/meminfo fields from MemAvailable through FilePmdMapped; none of them match]
00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc
-- setup/common.sh@32 -- # continue 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.351 13:11:17 
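By this point verify_nr_hugepages has recorded anon=0 and surp=0 and is about to read HugePages_Rsvd in exactly the same way; what remains is to compare the hugepage counters against the 1024 pages (512 per node) configured above. A rough sketch of that sort of final check, using plain awk lookups as hypothetical stand-ins for the hugepages.sh logic:

# Rough sketch with hypothetical names, not the actual hugepages.sh check:
# read the global hugepage counters that the trace has been fetching and
# compare the configured total against what even_2G_alloc expects.
hp_total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
hp_free=$(awk  '/^HugePages_Free:/  {print $2}' /proc/meminfo)
hp_surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)

expected=1024   # 512 on each of the two nodes set up above
echo "HugePages_Total=$hp_total Free=$hp_free Surp=$hp_surp (expecting $expected)"
[[ $hp_total -eq $expected ]] && echo OK || echo MISMATCH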
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283816 kB' 'MemFree: 40420048 kB' 'MemAvailable: 44941168 kB' 'Buffers: 13184 kB' 'Cached: 13130408 kB' 'SwapCached: 0 kB' 'Active: 9622896 kB' 'Inactive: 4106540 kB' 'Active(anon): 9113348 kB' 'Inactive(anon): 0 kB' 'Active(file): 509548 kB' 'Inactive(file): 4106540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 589128 kB' 'Mapped: 205552 kB' 'Shmem: 8527504 kB' 'KReclaimable: 552456 kB' 'Slab: 1536592 kB' 'SReclaimable: 552456 kB' 'SUnreclaim: 984136 kB' 'KernelStack: 21856 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10364356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218036 kB' 'VmallocChunk: 0 kB' 'Percpu: 101696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3265908 kB' 'DirectMap2M: 23683072 kB' 'DirectMap1G: 42991616 kB' 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.351 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.352 
13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.352 13:11:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.352 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:03:09.353 nr_hugepages=1024 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:09.353 resv_hugepages=0 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:09.353 surplus_hugepages=0 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:09.353 anon_hugepages=0 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 
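The lookup traced above is the get_meminfo helper from setup/common.sh: with no node index the test on /sys/devices/system/node/node/meminfo fails, so it falls back to /proc/meminfo, strips any "Node <N> " prefix, and walks the "Key: value" pairs until the requested key matches, echoing the value (here resv=0, after which the script confirms 1024 == nr_hugepages + surp + resv). A condensed sed/awk equivalent of that lookup, offered only as a sketch of the pattern; the real helper does the same thing in pure bash with mapfile and IFS=': ' read:

#!/usr/bin/env bash
# Hedged sketch of the meminfo lookup pattern seen in the trace above.
get_meminfo_sketch() {
	local key=$1 node=${2:-}
	local mem_f=/proc/meminfo
	# Prefer the per-node meminfo when a node index is given and it exists.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	# Per-node files prefix every line with "Node <N> "; strip it, then print
	# the value that follows "<key>:".
	sed -E 's/^Node [0-9]+ //' "$mem_f" | awk -v k="$key" '$1 == (k ":") { print $2; exit }'
}

# Example calls mirroring the trace:
#   get_meminfo_sketch HugePages_Rsvd      # -> 0
#   get_meminfo_sketch HugePages_Total     # -> 1024
#   get_meminfo_sketch HugePages_Surp 0    # node 0 -> 0
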
-- # [[ -n '' ]] 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.353 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283816 kB' 'MemFree: 40420828 kB' 'MemAvailable: 44941948 kB' 'Buffers: 13184 kB' 'Cached: 13130408 kB' 'SwapCached: 0 kB' 'Active: 9622520 kB' 'Inactive: 4106540 kB' 'Active(anon): 9112972 kB' 'Inactive(anon): 0 kB' 'Active(file): 509548 kB' 'Inactive(file): 4106540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 588776 kB' 'Mapped: 205552 kB' 'Shmem: 8527504 kB' 'KReclaimable: 552456 kB' 'Slab: 1536592 kB' 'SReclaimable: 552456 kB' 'SUnreclaim: 984136 kB' 'KernelStack: 21856 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10364376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218036 kB' 'VmallocChunk: 0 kB' 'Percpu: 101696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3265908 kB' 'DirectMap2M: 23683072 kB' 'DirectMap1G: 42991616 kB' 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.354 13:11:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.354 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.355 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.356 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 19052388 kB' 'MemUsed: 13532980 kB' 'SwapCached: 0 kB' 'Active: 6830008 kB' 'Inactive: 3786844 kB' 'Active(anon): 6659580 kB' 'Inactive(anon): 0 kB' 
'Active(file): 170428 kB' 'Inactive(file): 3786844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10403684 kB' 'Mapped: 105840 kB' 'AnonPages: 216452 kB' 'Shmem: 6446412 kB' 'KernelStack: 12616 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 285844 kB' 'Slab: 727612 kB' 'SReclaimable: 285844 kB' 'SUnreclaim: 441768 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.357 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.358 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.358 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.358 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.358 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.358 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.358 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.358 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.358 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.358 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.358 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.358 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.358 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.358 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.358 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.358 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.358 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.358 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.358 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.358 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.358 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.358 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.358 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.358 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.358 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.359 13:11:17 
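The phase above switches to per-node accounting: get_nodes found two NUMA nodes and expects 512 of the 1024 pages on each, and get_meminfo is re-run with a node index so it reads /sys/devices/system/node/node0/meminfo (and later node1) instead of the global file. A small standalone check of that even split, written against the same sysfs files the trace reads (the node paths and the 512-per-node expectation come from the log; this is a sketch, not part of the test suite):

#!/usr/bin/env bash
# Verify each NUMA node holds an even share of the 2048 kB hugepages and
# carries no surplus, as the even_2G_alloc trace checks node by node.
expected_per_node=512
for node_dir in /sys/devices/system/node/node[0-9]*; do
	node=${node_dir##*node}
	# Per-node meminfo lines look like: "Node 0 HugePages_Total:   512"
	total=$(awk '$3 == "HugePages_Total:" { print $4 }' "$node_dir/meminfo")
	surp=$(awk '$3 == "HugePages_Surp:" { print $4 }' "$node_dir/meminfo")
	echo "node$node: HugePages_Total=$total HugePages_Surp=$surp"
	(( total == expected_per_node && surp == 0 )) || echo "node$node: unexpected split" >&2
done
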
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.359 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.360 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.360 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.360 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.360 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.360 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.360 13:11:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.360 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.360 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.360 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.360 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.360 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.360 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.360 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.360 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.360 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.360 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.360 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.360 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.360 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.360 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.360 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.360 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.360 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.360 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.360 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.361 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.361 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.361 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.361 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.361 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.361 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.361 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.361 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.361 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.361 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.361 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.361 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.361 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.361 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.361 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.361 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.361 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.361 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.362 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.362 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.362 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.362 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.362 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.362 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.362 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.362 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.362 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.362 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.362 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.362 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.362 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.362 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.362 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.362 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.362 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.362 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.362 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.362 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.362 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.362 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.362 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.363 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.363 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.363 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.363 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.363 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.363 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.363 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.363 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:09.363 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.363 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.363 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.363 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.363 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.363 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.363 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.363 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.363 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.363 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.363 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.363 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.364 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.364 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.364 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.364 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.364 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:09.364 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:09.364 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:09.364 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:03:09.364 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.364 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:09.364 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.364 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.364 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.366 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:09.366 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:09.366 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.366 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.366 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.366 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.366 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698448 kB' 'MemFree: 21369008 kB' 'MemUsed: 6329440 kB' 'SwapCached: 0 kB' 'Active: 2793920 kB' 'Inactive: 319696 kB' 'Active(anon): 2454800 kB' 
'Inactive(anon): 0 kB' 'Active(file): 339120 kB' 'Inactive(file): 319696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2739952 kB' 'Mapped: 99712 kB' 'AnonPages: 373780 kB' 'Shmem: 2081136 kB' 'KernelStack: 9256 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 266612 kB' 'Slab: 808980 kB' 'SReclaimable: 266612 kB' 'SUnreclaim: 542368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:09.366 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.366 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.366 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.366 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.366 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.367 13:11:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.367 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.368 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.368 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.368 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.368 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.368 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.368 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.368 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.368 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.368 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.368 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.368 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.368 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.368 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.368 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.368 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.368 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.368 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.368 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.368 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.368 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.368 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.368 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.368 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.368 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.368 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.369 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.369 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.369 13:11:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.369 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.369 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.369 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.369 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.369 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.369 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.369 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.369 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.369 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.369 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.369 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.369 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.369 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.369 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.369 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.369 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.369 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.369 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.369 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.370 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.370 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.370 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.370 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.370 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.370 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.370 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.370 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.370 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.370 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.370 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.370 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.370 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.370 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.370 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.370 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.371 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.371 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.371 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.371 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.371 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.371 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.371 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.371 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.371 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.371 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.371 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.371 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.371 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.371 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.371 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.371 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.371 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.371 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.371 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.371 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.371 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.371 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.371 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.371 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.371 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.372 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.372 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.372 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.372 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.372 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.372 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.372 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.372 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:09.372 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.372 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.372 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.372 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.372 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.372 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.372 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.372 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.372 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.372 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.372 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.372 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.372 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.372 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.372 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.372 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.372 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:09.373 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:09.373 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:09.373 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:09.373 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:03:09.373 node0=512 expecting 512 00:03:09.373 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:09.373 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:09.373 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:09.373 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:03:09.373 node1=512 expecting 512 00:03:09.373 13:11:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]] 00:03:09.373 00:03:09.373 real 0m3.415s 00:03:09.373 user 0m1.267s 00:03:09.373 sys 0m2.187s 00:03:09.373 13:11:17 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:09.373 13:11:17 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:09.373 ************************************ 00:03:09.373 END TEST even_2G_alloc 00:03:09.373 ************************************ 00:03:09.637 13:11:17 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc 00:03:09.637 13:11:17 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:09.637 13:11:17 
setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:09.637 13:11:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:09.637 ************************************ 00:03:09.637 START TEST odd_alloc 00:03:09.637 ************************************ 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.637 13:11:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:12.933 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:12.933 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:12.933 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:12.933 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:12.933 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:12.933 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 
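The odd_alloc trace above requests 2098176 kB of hugepage memory (1025 pages of 2048 kB) and spreads it across the two NUMA nodes as node0=513 / node1=512, walking from the last node downward so the first node absorbs the odd page. Below is a minimal bash sketch of that split; the function name and arrays are illustrative only, not the actual setup/hugepages.sh source.

split_hugepages_per_node() {
  # total: requested hugepage count; nodes: NUMA node count to spread it across
  local total=$1 nodes=$2 node
  local -a per_node
  for ((node = nodes - 1; node >= 0; node--)); do
    per_node[node]=$((total / nodes))                           # even share for this node
    if ((node < total % nodes)); then ((per_node[node]++)); fi  # lower-numbered nodes absorb the remainder
  done
  for node in "${!per_node[@]}"; do
    echo "node${node}=${per_node[node]}"
  done
}
# split_hugepages_per_node 1025 2   ->   node0=513, node1=512 (matches the nodes_test values above)
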
00:03:12.933 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:12.933 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:12.933 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:12.933 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:12.933 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:12.933 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:12.933 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:12.933 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:12.933 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:12.933 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:12.933 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283816 kB' 'MemFree: 40425760 kB' 'MemAvailable: 44946816 kB' 'Buffers: 13184 kB' 'Cached: 13130556 kB' 'SwapCached: 0 kB' 'Active: 9624652 kB' 'Inactive: 4106540 kB' 'Active(anon): 9115104 kB' 'Inactive(anon): 0 kB' 'Active(file): 509548 kB' 'Inactive(file): 4106540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 590832 kB' 'Mapped: 205676 kB' 'Shmem: 8527652 kB' 'KReclaimable: 552392 kB' 'Slab: 1536508 kB' 'SReclaimable: 552392 kB' 'SUnreclaim: 984116 kB' 'KernelStack: 21856 kB' 'PageTables: 8404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480912 kB' 'Committed_AS: 10365180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218052 kB' 'VmallocChunk: 0 kB' 'Percpu: 101696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3265908 kB' 'DirectMap2M: 23683072 kB' 'DirectMap1G: 42991616 kB' 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
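The long field-by-field runs throughout this trace all come from the same meminfo lookup in setup/common.sh (get_meminfo): read /proc/meminfo, or the node-specific /sys/devices/system/node/nodeN/meminfo when a node is given, strip the "Node N " prefix, then scan "key: value" pairs with IFS=': ' until the requested key matches and echo its value. The following is a simplified, self-contained reconstruction based only on the commands visible in this log; the real script's details may differ.

get_meminfo_sketch() {
  local get=$1 node=$2            # e.g. get=HugePages_Surp, node=1 (node may be empty)
  local mem_f=/proc/meminfo
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  local line var val _
  while read -r line; do
    line=${line#"Node $node "}    # per-node files prefix every line with "Node N "
    IFS=': ' read -r var val _ <<< "$line"
    if [[ $var == "$get" ]]; then
      echo "$val"                 # numeric value only; a trailing "kB" unit lands in $_
      return 0
    fi
  done < "$mem_f"
  return 1
}
# get_meminfo_sketch HugePages_Surp 1   ->   0 on this run (no surplus pages on node 1)
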
00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.933 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
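Just before these lookups, the trace checks the transparent hugepage mode (hugepages.sh@95) and only then queries AnonHugePages, which comes back as 0 here. A rough, self-contained equivalent of that guard is sketched below; the paths are the standard sysfs/procfs locations and the variable names are illustrative, not taken from the script.

# THP mode string, e.g. "always [madvise] never"; "[never]" means anon THP is disabled.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
anon=0
if [[ $thp != *"[never]"* ]]; then
  # Anonymous transparent hugepages currently in use, in kB (0 kB in this run).
  anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon=${anon}"
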
00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283816 kB' 'MemFree: 40425456 kB' 'MemAvailable: 44946512 kB' 'Buffers: 13184 kB' 'Cached: 13130560 kB' 'SwapCached: 0 kB' 'Active: 9625860 kB' 'Inactive: 4106540 kB' 'Active(anon): 9116312 kB' 'Inactive(anon): 0 kB' 'Active(file): 509548 kB' 'Inactive(file): 4106540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 591960 kB' 'Mapped: 205564 kB' 'Shmem: 8527656 kB' 'KReclaimable: 552392 kB' 'Slab: 1536460 kB' 'SReclaimable: 552392 kB' 'SUnreclaim: 984068 kB' 'KernelStack: 21856 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480912 kB' 'Committed_AS: 10377768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218020 kB' 'VmallocChunk: 0 kB' 'Percpu: 101696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3265908 kB' 'DirectMap2M: 23683072 kB' 'DirectMap1G: 42991616 kB' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.934 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.935 
13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.935 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
[xtrace condensed: setup/common.sh@31-@32 walks the remaining fields of the /proc/meminfo snapshot (KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd), executing continue for every key that is not HugePages_Surp]
00:03:12.936 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.936 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:12.936 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:12.936 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0
00:03:12.936 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:03:12.936 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:12.936 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:12.936 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:12.936 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:12.936 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.936 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:12.936 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.936 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.936 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.936 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283816 kB' 'MemFree: 40434388 kB' 'MemAvailable: 44955444 kB' 'Buffers: 13184 kB' 'Cached: 13130576 kB' 'SwapCached: 0 kB' 'Active: 9624768 kB' 'Inactive: 4106540 kB' 'Active(anon): 9115220 kB' 'Inactive(anon): 0 kB' 'Active(file): 509548 kB' 'Inactive(file): 4106540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 590848 kB' 'Mapped: 205564 kB' 'Shmem: 8527672 kB' 'KReclaimable: 552392 kB' 'Slab: 1536444 kB' 'SReclaimable: 552392 kB' 'SUnreclaim: 984052 kB' 'KernelStack: 21840 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480912 kB' 'Committed_AS: 10364976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217988 kB' 'VmallocChunk: 0 kB' 'Percpu: 101696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3265908 kB' 'DirectMap2M: 23683072 kB' 'DirectMap1G: 42991616 kB'
00:03:12.936 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:12.936 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
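The @17-@33 lines above are the setup/common.sh get_meminfo helper resolving a single meminfo field. Below is a minimal, self-contained sketch of that lookup, reconstructed from the trace only; it is not the SPDK script itself, and the usage calls at the end are illustrative values from this run.

    #!/usr/bin/env bash
    shopt -s extglob

    # get_meminfo FIELD [NODE]: print FIELD from /proc/meminfo, or from the
    # per-node meminfo file when a NUMA node number is given.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        local -a mem

        # With a node number, use that node's file; with $node empty this path
        # does not exist, so the global /proc/meminfo is kept.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip that prefix so
        # both file formats parse identically.
        mem=("${mem[@]#Node +([0-9]) }")

        # Each remaining line is "Field:   value [kB]"; print the value of the
        # requested field and stop.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total      # global count, 1025 in this run
    get_meminfo HugePages_Surp 0     # surplus pages on NUMA node 0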
[xtrace condensed: setup/common.sh@31-@32 scans the snapshot printed above field by field (MemTotal through HugePages_Free), executing continue for every key that is not HugePages_Rsvd]
00:03:12.938 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:12.938 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:12.938 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:12.938 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0
00:03:12.938 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025
00:03:12.938 nr_hugepages=1025
00:03:12.938 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:03:12.938 resv_hugepages=0
00:03:12.938 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:03:12.938 surplus_hugepages=0
00:03:12.938 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:03:12.938 anon_hugepages=0
00:03:12.938 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:12.938 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages ))
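Taken together, the checks at setup/hugepages.sh@106 and @108 reduce to the arithmetic below, using the values just echoed. This is a reference sketch, not the script itself.

    # Values from the trace above.
    nr_hugepages=1025   # requested page count (deliberately odd)
    surp=0              # HugePages_Surp from /proc/meminfo
    resv=0              # HugePages_Rsvd from /proc/meminfo
    total=1025          # HugePages_Total from /proc/meminfo

    # The test asserts that the kernel's accounting matches the request.
    (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage accounting" >&2
    (( total == nr_hugepages )) || echo "fewer pages allocated than requested" >&2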
00:03:12.938 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:03:12.938 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:12.938 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:12.938 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:12.938 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:12.938 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.938 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:12.938 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.938 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.938 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.938 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:12.938 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:12.938 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283816 kB' 'MemFree: 40434096 kB' 'MemAvailable: 44955152 kB' 'Buffers: 13184 kB' 'Cached: 13130600 kB' 'SwapCached: 0 kB' 'Active: 9624292 kB' 'Inactive: 4106540 kB' 'Active(anon): 9114744 kB' 'Inactive(anon): 0 kB' 'Active(file): 509548 kB' 'Inactive(file): 4106540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 590348 kB' 'Mapped: 205564 kB' 'Shmem: 8527696 kB' 'KReclaimable: 552392 kB' 'Slab: 1536444 kB' 'SReclaimable: 552392 kB' 'SUnreclaim: 984052 kB' 'KernelStack: 21824 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480912 kB' 'Committed_AS: 10365004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217988 kB' 'VmallocChunk: 0 kB' 'Percpu: 101696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3265908 kB' 'DirectMap2M: 23683072 kB' 'DirectMap1G: 42991616 kB'
[xtrace condensed: setup/common.sh@31-@32 scans this snapshot field by field (MemTotal through Unaccepted), executing continue for every key that is not HugePages_Total]
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 19054376 kB' 'MemUsed: 13530992 kB' 'SwapCached: 0 kB' 'Active: 6830496 kB' 'Inactive: 3786844 kB' 'Active(anon): 6660068 kB' 'Inactive(anon): 0 kB' 'Active(file): 170428 kB' 'Inactive(file): 3786844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10403820 kB' 'Mapped: 105840 kB' 'AnonPages: 216796 kB' 'Shmem: 6446548 kB' 'KernelStack: 12600 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 285780 kB' 'Slab: 727312 kB' 'SReclaimable: 285780 kB' 'SUnreclaim: 441532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-@32 begins the same field-by-field scan over the node0 snapshot, executing continue for every key that is not HugePages_Surp]
-- setup/common.sh@31 -- # IFS=': ' 00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.940 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.941 13:11:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698448 kB' 'MemFree: 21377904 kB' 'MemUsed: 6320544 kB' 'SwapCached: 0 kB' 'Active: 2794064 kB' 'Inactive: 319696 kB' 'Active(anon): 2454944 kB' 'Inactive(anon): 0 kB' 'Active(file): 339120 kB' 'Inactive(file): 319696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2739996 kB' 'Mapped: 99724 kB' 'AnonPages: 373852 kB' 'Shmem: 2081180 kB' 'KernelStack: 9256 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 266612 kB' 'Slab: 809132 kB' 'SReclaimable: 266612 kB' 'SUnreclaim: 542520 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.941 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
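The long run of "[[ <field> == HugePages_Surp ]]" / "continue" entries in this part of the trace is setup/common.sh walking the per-node meminfo dump key by key until it reaches the requested field, then echoing its value (0 surplus pages on both nodes here). A minimal sketch of that lookup, under an assumed helper name and the standard meminfo layout, not the script's actual code:

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val rest
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        # per-node meminfo prefixes every line with "Node N "; strip it first
        [[ -n $node ]] && line=${line#"Node $node "}
        IFS=': ' read -r var val rest <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"        # numeric value only; the "kB" unit lands in $rest
            return 0
        fi
    done < "$mem_f"
    return 1
}
# e.g. get_meminfo_sketch HugePages_Surp 1   -> 0 on this box, matching the trace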
00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513' 00:03:12.942 node0=513 expecting 513 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:03:12.942 node1=512 expecting 512 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:12.942 00:03:12.942 real 0m3.450s 00:03:12.942 user 0m1.289s 00:03:12.942 sys 0m2.203s 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:12.942 13:11:20 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:12.942 ************************************ 00:03:12.942 END TEST odd_alloc 00:03:12.942 ************************************ 00:03:12.943 13:11:20 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test custom_alloc custom_alloc 00:03:12.943 13:11:20 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:12.943 13:11:20 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:12.943 13:11:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:13.202 ************************************ 00:03:13.202 START TEST custom_alloc 00:03:13.202 ************************************ 00:03:13.202 13:11:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:03:13.202 13:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:03:13.202 13:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:03:13.202 13:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=() 00:03:13.202 13:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:03:13.202 13:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:13.202 13:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:03:13.202 13:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576 00:03:13.202 13:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:13.202 13:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:13.202 13:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:03:13.202 13:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:13.202 13:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:13.202 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:13.202 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 
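The tail of odd_alloc just above is the point of the test: a 1025-page request cannot split evenly across two nodes, so one node carries 513 pages and the other 512, and the verification only requires the *set* of per-node counts to match, not which node took the extra page. Hence the echoes "node0=513 expecting 513" / "node1=512 expecting 512" and the sorted comparison "[[ 512 513 == 512 513 ]]". A small illustration of that order-independent check, assuming bash indexed arrays as in the trace (the counts become array indices, and "${!arr[*]}" expands indices in ascending order):

nodes_test=( [0]=513 [1]=512 )   # per-node counts read back from meminfo
nodes_sys=(  [0]=513 [1]=512 )   # per-node counts the test requested
declare -a sorted_t=() sorted_s=()
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1
    sorted_s[nodes_sys[node]]=1
done
echo "node0=${nodes_sys[0]} expecting ${nodes_test[0]}"
echo "node1=${nodes_sys[1]} expecting ${nodes_test[1]}"
[[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]] && echo OK   # both sides expand to "512 513"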
00:03:13.202 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:13.202 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:13.202 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:13.202 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:13.202 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:03:13.202 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:13.202 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:03:13.202 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256 00:03:13.202 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1 00:03:13.202 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:13.202 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:03:13.202 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0 00:03:13.202 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:13.202 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:13.202 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512 00:03:13.202 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@166 -- # (( 2 > 1 )) 00:03:13.202 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152 00:03:13.202 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:03:13.202 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:13.202 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:13.202 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:13.202 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:13.202 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 )) 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for 
node in "${!nodes_hp[@]}" 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 2 > 0 )) 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.203 13:11:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:15.738 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:15.738 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:15.738 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:15.738 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:15.738 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:15.738 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:15.739 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:15.739 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:15.739 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:15.739 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:15.739 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:15.739 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:15.739 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 
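At this point scripts/setup.sh has been invoked with the HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' string built just above, i.e. 512 hugepages on node0 plus 1024 on node1 for 1536 in total; the device lines only confirm the listed PCI functions are already bound to vfio-pci, so the hugepage layout is the only thing changing. For reference, a hedged sketch of what such a per-node request amounts to at the sysfs level (the test delegates this to setup.sh, not to these commands):

want=( [0]=512 [1]=1024 )          # 2 MiB pages per NUMA node
for node in "${!want[@]}"; do
    sysfs=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    echo "${want[node]}" | sudo tee "$sysfs" > /dev/null
done
grep -E 'HugePages_Total|Hugetlb' /proc/meminfo   # expect 1536 pages / 3145728 kB, as in the dump below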
00:03:15.739 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:15.739 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:15.739 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:15.739 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:16.004 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536 00:03:16.004 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages 00:03:16.004 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node 00:03:16.004 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:16.004 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:16.004 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:16.004 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:16.004 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:16.004 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:16.004 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:16.004 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:16.004 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:16.004 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:16.004 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.004 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.004 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.004 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.004 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.004 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.004 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.004 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283816 kB' 'MemFree: 39407244 kB' 'MemAvailable: 43928300 kB' 'Buffers: 13184 kB' 'Cached: 13130732 kB' 'SwapCached: 0 kB' 'Active: 9623696 kB' 'Inactive: 4106540 kB' 'Active(anon): 9114148 kB' 'Inactive(anon): 0 kB' 'Active(file): 509548 kB' 'Inactive(file): 4106540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 589620 kB' 'Mapped: 205600 kB' 'Shmem: 8527828 kB' 'KReclaimable: 552392 kB' 'Slab: 1535820 kB' 'SReclaimable: 552392 kB' 'SUnreclaim: 983428 kB' 'KernelStack: 21872 kB' 'PageTables: 8376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957648 kB' 'Committed_AS: 10365996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218052 kB' 'VmallocChunk: 0 kB' 'Percpu: 101696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3265908 kB' 'DirectMap2M: 23683072 kB' 'DirectMap1G: 42991616 kB' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.005 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
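The anon=0 result and the HugePages_Surp lookup starting here are the first readings of verify_nr_hugepages for the custom layout: anonymous THP usage is recorded (0 kB in the dump above, so transparent hugepages are not inflating the counters) before the surplus and total counts are checked against the 1536 pages just configured, using the same nr_hugepages + surp + resv accounting seen for odd_alloc at hugepages.sh@109 earlier in the trace. The same values can be pulled ad hoc with standard tools; a small illustrative sketch, not part of the test itself:

thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)             # "always [madvise] never" above
anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)         # 0 kB in the dump above
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)           # surplus pages, 0 expected
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)         # 1536 after the custom allocation
echo "thp=$thp anon=${anon_kb}kB surp=$surp total=$total"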
00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283816 kB' 'MemFree: 39408992 kB' 'MemAvailable: 43930048 kB' 'Buffers: 13184 kB' 'Cached: 13130732 kB' 'SwapCached: 0 kB' 'Active: 9623588 kB' 'Inactive: 4106540 kB' 'Active(anon): 9114040 kB' 'Inactive(anon): 0 kB' 'Active(file): 509548 kB' 'Inactive(file): 4106540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 589484 kB' 'Mapped: 205572 kB' 'Shmem: 8527828 kB' 'KReclaimable: 552392 kB' 'Slab: 1535804 kB' 'SReclaimable: 552392 kB' 'SUnreclaim: 983412 kB' 'KernelStack: 21872 kB' 'PageTables: 8384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957648 kB' 'Committed_AS: 10367104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218004 kB' 'VmallocChunk: 0 kB' 'Percpu: 101696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3265908 kB' 'DirectMap2M: 23683072 kB' 'DirectMap1G: 42991616 kB' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.006 13:11:23 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.006 
13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.006 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
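[editor's note] As a quick sanity check on the meminfo snapshot printf'd at the start of this scan: the hugetlb pool it reports is exactly the page count times the page size, 1536 pages of 2048 kB each.

    # Values taken from the 'HugePages_Total: 1536', 'Hugepagesize: 2048 kB' and
    # 'Hugetlb: 3145728 kB' fields in the snapshot above.
    echo $(( 1536 * 2048 )) kB    # 3145728 kB, matching the Hugetlb field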
00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.007 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283816 kB' 'MemFree: 39407596 kB' 'MemAvailable: 43928652 kB' 'Buffers: 13184 kB' 'Cached: 13130748 kB' 'SwapCached: 0 kB' 'Active: 9628964 kB' 'Inactive: 4106540 kB' 'Active(anon): 9119416 kB' 'Inactive(anon): 0 kB' 'Active(file): 509548 kB' 'Inactive(file): 4106540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 595320 kB' 'Mapped: 206076 kB' 'Shmem: 8527844 kB' 'KReclaimable: 552392 kB' 'Slab: 1535804 kB' 'SReclaimable: 552392 kB' 'SUnreclaim: 983412 kB' 'KernelStack: 21888 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957648 kB' 'Committed_AS: 10372152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217988 kB' 'VmallocChunk: 0 kB' 'Percpu: 101696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3265908 kB' 'DirectMap2M: 23683072 kB' 'DirectMap1G: 42991616 kB' 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.008 13:11:23 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.008 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
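[editor's note] The HugePages_Surp scan above and the HugePages_Rsvd scan in progress feed the accounting step a little further down in this trace (hugepages.sh@96 through @109), where the script records the anon/surplus/reserved counts and checks that the 1536 pages it configured are all present and unencumbered before re-reading HugePages_Total. A hedged, self-contained sketch of that bookkeeping (helper and variable names here are illustrative, not the script's own):

    #!/usr/bin/env bash
    # Stand-in for get_meminfo, just for this sketch.
    meminfo() { awk -v k="$1" -F'[: ]+' '$1 == k { print $2 }' /proc/meminfo; }

    req=1536                          # pages the custom_alloc test asked for
    anon=$(meminfo AnonHugePages)     # 0 in the trace
    surp=$(meminfo HugePages_Surp)    # 0
    resv=$(meminfo HugePages_Rsvd)    # 0
    nr=$req

    echo "nr_hugepages=$nr resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    if (( req == nr + surp + resv )) && (( req == nr )); then
        meminfo HugePages_Total       # 1536 expected, as the next scan in the log confirms
    fi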
00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.009 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536 00:03:16.010 nr_hugepages=1536 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:16.010 resv_hugepages=0 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:16.010 surplus_hugepages=0 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:16.010 anon_hugepages=0 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages )) 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283816 kB' 'MemFree: 39406588 kB' 'MemAvailable: 43927644 kB' 'Buffers: 13184 kB' 'Cached: 13130772 kB' 'SwapCached: 0 kB' 'Active: 9624500 kB' 'Inactive: 4106540 kB' 'Active(anon): 9114952 kB' 'Inactive(anon): 0 kB' 'Active(file): 509548 kB' 'Inactive(file): 4106540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 590324 kB' 'Mapped: 206076 kB' 'Shmem: 8527868 kB' 'KReclaimable: 552392 kB' 'Slab: 1535804 kB' 'SReclaimable: 552392 kB' 'SUnreclaim: 983412 kB' 'KernelStack: 21840 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957648 kB' 'Committed_AS: 10367676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218004 kB' 'VmallocChunk: 0 kB' 'Percpu: 101696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3265908 kB' 'DirectMap2M: 23683072 kB' 'DirectMap1G: 42991616 kB' 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.010 13:11:23 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.010 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.011 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 19075544 kB' 'MemUsed: 13509824 kB' 'SwapCached: 0 kB' 'Active: 6827160 kB' 'Inactive: 3786844 kB' 'Active(anon): 6656732 kB' 'Inactive(anon): 0 kB' 'Active(file): 170428 kB' 'Inactive(file): 3786844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10403864 kB' 'Mapped: 105840 kB' 'AnonPages: 213224 kB' 'Shmem: 6446592 kB' 'KernelStack: 12568 kB' 'PageTables: 3932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 285780 kB' 'Slab: 726236 kB' 'SReclaimable: 285780 kB' 'SUnreclaim: 440456 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
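[editor's note] The trace above has just read HugePages_Total back as 1536 and run hugepages.sh's accounting check, (( 1536 == nr_hugepages + surp + resv )): the pages the custom_alloc test requested (512 on node0 plus 1024 on node1) plus any surplus and reserved pages must add up to the kernel's global total. A standalone sketch of that check, reading the same /proc/meminfo fields; the hard-coded 1536 is simply what this particular run requested, and the variable names mirror the trace rather than the exact SPDK source:

# Sketch: global HugePages_Total must equal the requested page count plus
# surplus and reserved pages reported by /proc/meminfo.
nr_hugepages=1536    # this run asked for 512 (node0) + 1024 (node1) 2 MiB pages
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total pages"
else
        echo "mismatch: total=$total requested=$nr_hugepages surp=$surp resv=$resv" >&2
fi
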
00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.012 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 
00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698448 kB' 'MemFree: 20323484 kB' 'MemUsed: 7374964 kB' 'SwapCached: 0 kB' 'Active: 2802060 kB' 'Inactive: 319696 kB' 'Active(anon): 2462940 kB' 'Inactive(anon): 0 kB' 'Active(file): 339120 kB' 'Inactive(file): 319696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2740092 kB' 'Mapped: 100496 kB' 'AnonPages: 381804 kB' 'Shmem: 2081276 kB' 'KernelStack: 9272 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 266612 kB' 'Slab: 809568 kB' 'SReclaimable: 266612 kB' 'SUnreclaim: 542956 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.013 13:11:23 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.013 13:11:23 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.013 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
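[editor's note] The two per-node queries traced here (HugePages_Surp for node 0 and then node 1) both go through setup/common.sh's get_meminfo: slurp /proc/meminfo or /sys/devices/system/node/node<N>/meminfo, strip the "Node <id> " prefix that the per-node file adds, then split each line on ": " until the requested field turns up and echo its value. A self-contained sketch of that parsing pattern, assuming the same prefix strip and field separator shown in the trace; the function name and argument handling are illustrative, not the verbatim SPDK helper:

# Sketch: return one meminfo field, optionally for a single NUMA node.
shopt -s extglob    # needed for the +([0-9]) pattern in the prefix strip
get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo mem var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
                && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        # Per-node meminfo prefixes every line with "Node <id> "; drop it.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
                [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
}
# e.g. the per-node surplus counts queried for node 0 and node 1 above:
get_meminfo_sketch HugePages_Surp 0
get_meminfo_sketch HugePages_Surp 1
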
00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.014 13:11:23 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:03:16.014 node0=512 expecting 512 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024' 00:03:16.014 node1=1024 expecting 1024 00:03:16.014 13:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:16.014 00:03:16.014 real 0m3.003s 00:03:16.014 user 0m0.988s 00:03:16.014 sys 0m1.846s 00:03:16.014 13:11:24 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:16.014 13:11:24 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:16.014 ************************************ 00:03:16.014 END TEST custom_alloc 00:03:16.014 ************************************ 00:03:16.014 13:11:24 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:16.014 13:11:24 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:16.014 13:11:24 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:16.014 13:11:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:16.274 ************************************ 00:03:16.274 START TEST no_shrink_alloc 00:03:16.274 ************************************ 00:03:16.274 13:11:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:03:16.274 13:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0 00:03:16.274 13:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:03:16.274 13:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:03:16.274 13:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift 00:03:16.274 13:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0') 00:03:16.274 13:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids 00:03:16.274 13:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:16.274 13:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:16.274 13:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:03:16.274 13:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:03:16.274 13:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:16.274 13:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:16.274 13:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:16.274 13:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:16.274 13:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:16.274 13:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:03:16.274 13:11:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:03:16.275 13:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:03:16.275 13:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0 00:03:16.275 13:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024 00:03:16.275 13:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0 00:03:16.275 13:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output 00:03:16.275 13:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.275 13:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:18.811 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:18.811 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:18.811 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:18.811 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:18.811 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:18.811 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:18.811 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:18.811 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:18.811 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:18.811 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:18.811 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:18.811 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:18.811 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:18.811 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:18.811 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:18.811 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:18.811 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:18.811 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages 00:03:18.811 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:03:18.811 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:18.811 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:18.811 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:18.811 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:18.811 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:18.811 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:18.811 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:18.811 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:18.811 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:18.811 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:18.811 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.811 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.811 13:11:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.811 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.811 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.811 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.811 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.811 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.812 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283816 kB' 'MemFree: 40472860 kB' 'MemAvailable: 44993916 kB' 'Buffers: 13184 kB' 'Cached: 13130900 kB' 'SwapCached: 0 kB' 'Active: 9625592 kB' 'Inactive: 4106540 kB' 'Active(anon): 9116044 kB' 'Inactive(anon): 0 kB' 'Active(file): 509548 kB' 'Inactive(file): 4106540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 591340 kB' 'Mapped: 206604 kB' 'Shmem: 8527996 kB' 'KReclaimable: 552392 kB' 'Slab: 1536732 kB' 'SReclaimable: 552392 kB' 'SUnreclaim: 984340 kB' 'KernelStack: 21968 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10401060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218100 kB' 'VmallocChunk: 0 kB' 'Percpu: 101696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3265908 kB' 'DirectMap2M: 23683072 kB' 'DirectMap1G: 42991616 kB' 00:03:18.812 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.812 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.812 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.812 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.812 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.812 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.812 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.812 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.812 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.812 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.812 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.812 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.812 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.812 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.812 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
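[editor's note] Just before this verification pass, the no_shrink_alloc test asked for 1024 x 2 MiB pages pinned to node 0 by exporting NRHUGE=1024 and HUGENODE=0 and running scripts/setup.sh, as the trace shows. Outside the SPDK script, the same per-node reservation can be made through the kernel's standard per-node hugepage sysfs knobs; a minimal sketch (the paths are the generic kernel layout, the variable names are illustrative, and this is not a claim about what setup.sh does internally):

# Sketch: reserve 1024 x 2048 kB hugepages on NUMA node 0 via sysfs, then
# read back what the kernel actually allocated and how many are still free.
node=0 nr=1024 size_kb=2048
sysfs=/sys/devices/system/node/node$node/hugepages/hugepages-${size_kb}kB
echo "$nr" | sudo tee "$sysfs/nr_hugepages" >/dev/null
echo "node$node allocated: $(cat "$sysfs/nr_hugepages") free: $(cat "$sysfs/free_hugepages")"
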
00:03:18.812 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.812 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.812 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.812 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.812 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.812 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.075 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.076 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283816 kB' 'MemFree: 40472608 kB' 'MemAvailable: 44993664 kB' 'Buffers: 13184 kB' 'Cached: 13130904 kB' 'SwapCached: 0 kB' 'Active: 9625288 kB' 'Inactive: 4106540 kB' 'Active(anon): 9115740 kB' 'Inactive(anon): 0 kB' 'Active(file): 509548 kB' 'Inactive(file): 4106540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 591048 kB' 'Mapped: 206600 kB' 'Shmem: 8528000 kB' 'KReclaimable: 552392 kB' 'Slab: 1536892 kB' 'SReclaimable: 552392 kB' 'SUnreclaim: 984500 kB' 'KernelStack: 21952 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10401080 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 218084 kB' 'VmallocChunk: 0 kB' 'Percpu: 101696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3265908 kB' 'DirectMap2M: 23683072 kB' 'DirectMap1G: 42991616 kB' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 
13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.077 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.078 13:11:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.078 13:11:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283816 kB' 'MemFree: 40472608 kB' 'MemAvailable: 44993664 kB' 'Buffers: 13184 kB' 'Cached: 13130920 kB' 'SwapCached: 0 kB' 'Active: 9625368 kB' 'Inactive: 4106540 kB' 'Active(anon): 9115820 kB' 'Inactive(anon): 0 kB' 'Active(file): 509548 kB' 'Inactive(file): 4106540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 591052 kB' 'Mapped: 206600 kB' 'Shmem: 8528016 kB' 'KReclaimable: 552392 kB' 'Slab: 1536892 kB' 'SReclaimable: 552392 kB' 'SUnreclaim: 984500 kB' 'KernelStack: 21952 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10401100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218100 kB' 'VmallocChunk: 0 kB' 'Percpu: 101696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3265908 kB' 'DirectMap2M: 23683072 kB' 'DirectMap1G: 42991616 kB' 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.078 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 
13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.079 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.080 13:11:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:03:19.080 nr_hugepages=1024 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:19.080 resv_hugepages=0 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:19.080 surplus_hugepages=0 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:19.080 anon_hugepages=0 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283816 kB' 'MemFree: 40472608 kB' 'MemAvailable: 44993664 kB' 'Buffers: 13184 kB' 'Cached: 13130960 kB' 'SwapCached: 0 kB' 'Active: 9625044 kB' 'Inactive: 4106540 kB' 'Active(anon): 9115496 kB' 'Inactive(anon): 0 kB' 'Active(file): 509548 kB' 'Inactive(file): 4106540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 590648 kB' 'Mapped: 206600 kB' 'Shmem: 8528056 kB' 'KReclaimable: 552392 kB' 'Slab: 1536892 kB' 'SReclaimable: 552392 kB' 'SUnreclaim: 984500 kB' 'KernelStack: 21936 kB' 'PageTables: 8460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10401124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218116 kB' 'VmallocChunk: 0 kB' 'Percpu: 101696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3265908 kB' 'DirectMap2M: 23683072 kB' 'DirectMap1G: 42991616 kB' 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.080 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.081 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
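[Editor's note] The meminfo snapshot printf'd a few entries back already pins down the hugepage budget on this machine: 1024 pages of 2048 kB each, which is exactly the 2097152 kB reported on the Hugetlb line. A quick check with the values copied from that snapshot:

    pages=1024        # HugePages_Total from the snapshot above
    page_kb=2048      # Hugepagesize
    echo "$(( pages * page_kb )) kB"   # 2097152 kB = 2 GiB, matching the Hugetlb line
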
00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:19.082 13:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 18036292 kB' 'MemUsed: 14549076 kB' 'SwapCached: 0 kB' 'Active: 6826888 kB' 'Inactive: 3786844 kB' 'Active(anon): 6656460 kB' 'Inactive(anon): 0 kB' 'Active(file): 170428 kB' 'Inactive(file): 3786844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10403864 kB' 'Mapped: 106756 kB' 'AnonPages: 212992 kB' 'Shmem: 6446592 kB' 'KernelStack: 12600 kB' 'PageTables: 3976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 285780 kB' 'Slab: 726968 kB' 'SReclaimable: 285780 kB' 'SUnreclaim: 441188 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.082 13:11:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.082 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
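[Editor's note] Once the global total checks out, get_nodes walks /sys/devices/system/node/node+([0-9]) and records 1024 pages on node 0 and 0 on node 1 (no_nodes=2), then get_meminfo is re-run against node0's own meminfo file, as traced around this point. A condensed sketch of that per-node count; awk stands in for the script's IFS/read loop but reads the same sysfs files:

    # Sketch of the per-node walk seen in the trace (node glob as used by the script).
    shopt -s extglob
    for node in /sys/devices/system/node/node+([0-9]); do
        # Per-node meminfo lines are prefixed "Node <N> ", so the key is field 3.
        total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node/meminfo")
        echo "node${node##*node}: ${total:-0} huge pages"
    done
    # Expected here: node0: 1024 huge pages / node1: 0 huge pages
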
00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:03:19.083 node0=1024 expecting 1024 00:03:19.083 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:03:19.084 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no 00:03:19.084 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512 00:03:19.084 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0 00:03:19.084 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output 00:03:19.084 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.084 13:11:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:22.472 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:22.472 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:22.472 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:22.472 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:22.472 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:22.472 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:22.472 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:22.472 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:22.472 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:22.472 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:22.472 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:22.472 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:22.472 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:22.472 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:22.472 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:22.472 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:22.472 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:22.472 INFO: Requested 512 hugepages but 1024 already 
allocated on node0 00:03:22.472 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages 00:03:22.472 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:03:22.472 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:22.472 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:22.472 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:22.472 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:22.472 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:22.472 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283816 kB' 'MemFree: 40507032 kB' 'MemAvailable: 45028088 kB' 'Buffers: 13184 kB' 'Cached: 13131052 kB' 'SwapCached: 0 kB' 'Active: 9629412 kB' 'Inactive: 4106540 kB' 'Active(anon): 9119864 kB' 'Inactive(anon): 0 kB' 'Active(file): 509548 kB' 'Inactive(file): 4106540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 595496 kB' 'Mapped: 206612 kB' 'Shmem: 8528148 kB' 'KReclaimable: 552392 kB' 'Slab: 1536804 kB' 'SReclaimable: 552392 kB' 'SUnreclaim: 984412 kB' 'KernelStack: 21952 kB' 'PageTables: 8556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10401724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218084 kB' 'VmallocChunk: 0 kB' 'Percpu: 101696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3265908 kB' 'DirectMap2M: 23683072 kB' 'DirectMap1G: 42991616 kB' 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.473 13:11:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.473 13:11:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.473 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.474 
13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.474 
13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.474 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
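The long runs of "IFS=': '", "read -r var val _", "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" and "continue" entries above and below are bash xtrace from the get_meminfo helper in setup/common.sh scanning /proc/meminfo one key at a time; the backslash-escaped right-hand side is simply how xtrace renders a quoted, literal comparison string. Below is a minimal sketch of what the traced commands appear to implement, with the control flow between the traced lines assumed rather than copied from the SPDK source:

  #!/usr/bin/env bash
  shopt -s extglob                     # needed for the +([0-9]) pattern below

  get_meminfo() {
      local get=$1                     # key to report, e.g. HugePages_Surp
      local node=${2:-}                # optional NUMA node; empty means system-wide
      local var val
      local mem_f mem

      mem_f=/proc/meminfo
      # Per-node lookups read that node's own meminfo file when it exists.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node N "; strip that prefix.
      mem=("${mem[@]#Node +([0-9]) }")

      # "Key:   value kB" splits on ': ' -> var=Key, val=value, 'kB' falls into _.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # skip every other key (the 'continue' runs in this trace)
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
  }

  # e.g. get_meminfo HugePages_Surp  -> prints "0" on this machine, per the trace that follows.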
00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283816 kB' 'MemFree: 40506892 kB' 'MemAvailable: 45027948 kB' 'Buffers: 13184 kB' 'Cached: 13131056 kB' 'SwapCached: 0 kB' 'Active: 9628956 kB' 'Inactive: 4106540 kB' 'Active(anon): 9119408 kB' 'Inactive(anon): 0 kB' 'Active(file): 509548 kB' 'Inactive(file): 4106540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 595020 kB' 'Mapped: 206572 kB' 'Shmem: 8528152 kB' 'KReclaimable: 552392 kB' 'Slab: 1536816 kB' 'SReclaimable: 552392 kB' 'SUnreclaim: 984424 kB' 'KernelStack: 21936 kB' 'PageTables: 8496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10401744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218068 kB' 'VmallocChunk: 0 kB' 'Percpu: 101696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3265908 kB' 'DirectMap2M: 23683072 kB' 'DirectMap1G: 42991616 kB' 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.475 13:11:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.475 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.476 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.477 13:11:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
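Around these lookups, the setup/hugepages.sh trace (entries @96 through @109) records anon=0, surp=0 and resv=0, echoes nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0, and runs two arithmetic checks comparing 1024 against nr_hugepages + surp + resv and against nr_hugepages alone. A hedged sketch of that bookkeeping, assuming get_meminfo as sketched above and assuming how the requested page count is carried (this excerpt does not show where the literal 1024 is expanded from):

  requested=1024                                 # pages the test configured (assumption)
  nr_hugepages=$(get_meminfo HugePages_Total)    # 1024 in this run
  anon=$(get_meminfo AnonHugePages)              # 0
  surp=$(get_meminfo HugePages_Surp)             # 0
  resv=$(get_meminfo HugePages_Rsvd)             # 0

  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  echo "anon_hugepages=$anon"

  # The pool only passes if the configured count is fully accounted for
  # and no pages were shrunk away or left surplus/reserved.
  (( requested == nr_hugepages + surp + resv ))
  (( requested == nr_hugepages ))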
00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:22.477 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283816 kB' 'MemFree: 40507316 kB' 'MemAvailable: 45028372 kB' 'Buffers: 13184 kB' 'Cached: 13131076 kB' 'SwapCached: 0 kB' 'Active: 9628704 kB' 'Inactive: 4106540 kB' 'Active(anon): 9119156 kB' 'Inactive(anon): 0 kB' 'Active(file): 509548 kB' 'Inactive(file): 4106540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 594808 kB' 'Mapped: 206572 kB' 'Shmem: 8528172 kB' 'KReclaimable: 552392 kB' 'Slab: 1536808 kB' 'SReclaimable: 552392 kB' 'SUnreclaim: 984416 kB' 'KernelStack: 21920 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10401764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218052 kB' 'VmallocChunk: 0 kB' 'Percpu: 101696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3265908 kB' 'DirectMap2M: 23683072 kB' 'DirectMap1G: 42991616 kB' 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.478 13:11:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.478 
13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.478 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.479 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.480 13:11:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:03:22.480 nr_hugepages=1024 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:22.480 resv_hugepages=0 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:22.480 surplus_hugepages=0 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:22.480 anon_hugepages=0 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283816 kB' 'MemFree: 40507932 kB' 'MemAvailable: 45028988 kB' 'Buffers: 13184 kB' 'Cached: 13131096 kB' 'SwapCached: 0 kB' 'Active: 9628680 kB' 'Inactive: 4106540 kB' 'Active(anon): 9119132 kB' 'Inactive(anon): 0 kB' 'Active(file): 509548 kB' 'Inactive(file): 4106540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 594708 kB' 'Mapped: 206572 kB' 'Shmem: 8528192 kB' 'KReclaimable: 552392 kB' 'Slab: 1536808 kB' 'SReclaimable: 552392 kB' 'SUnreclaim: 984416 kB' 'KernelStack: 21936 kB' 'PageTables: 8496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10401788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218052 kB' 'VmallocChunk: 0 kB' 'Percpu: 101696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3265908 kB' 'DirectMap2M: 23683072 kB' 'DirectMap1G: 42991616 kB' 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.480 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.481 13:11:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.481 13:11:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.481 13:11:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.481 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.482 13:11:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.482 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 18065456 kB' 'MemUsed: 14519912 kB' 'SwapCached: 0 kB' 'Active: 6829264 kB' 'Inactive: 3786844 kB' 'Active(anon): 6658836 kB' 'Inactive(anon): 0 kB' 'Active(file): 170428 kB' 'Inactive(file): 3786844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10403868 kB' 'Mapped: 106756 kB' 'AnonPages: 215808 kB' 'Shmem: 6446596 kB' 'KernelStack: 12584 kB' 'PageTables: 3976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 285780 kB' 'Slab: 727160 kB' 'SReclaimable: 285780 kB' 'SUnreclaim: 441380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.483 13:11:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.483 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.484 13:11:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.484 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.485 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.485 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.485 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.485 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.485 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.485 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.485 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.485 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.485 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.485 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:22.485 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:22.485 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:22.485 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:22.485 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:03:22.485 node0=1024 expecting 1024 00:03:22.485 13:11:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:03:22.485 00:03:22.485 real 0m6.251s 00:03:22.485 user 0m2.270s 00:03:22.485 sys 0m3.922s 00:03:22.485 13:11:30 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:22.485 13:11:30 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:22.485 ************************************ 00:03:22.485 END TEST no_shrink_alloc 
00:03:22.485 ************************************ 00:03:22.485 13:11:30 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp 00:03:22.485 13:11:30 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:03:22.485 13:11:30 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:03:22.485 13:11:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:22.485 13:11:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:22.485 13:11:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:22.485 13:11:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:22.485 13:11:30 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:03:22.485 13:11:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:22.485 13:11:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:22.485 13:11:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:22.485 13:11:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:22.485 13:11:30 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:03:22.485 13:11:30 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:03:22.485 00:03:22.485 real 0m21.964s 00:03:22.485 user 0m7.464s 00:03:22.485 sys 0m12.977s 00:03:22.485 13:11:30 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:22.485 13:11:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:22.485 ************************************ 00:03:22.485 END TEST hugepages 00:03:22.485 ************************************ 00:03:22.485 13:11:30 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:22.485 13:11:30 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:22.485 13:11:30 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:22.485 13:11:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:22.485 ************************************ 00:03:22.485 START TEST driver 00:03:22.485 ************************************ 00:03:22.485 13:11:30 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:22.744 * Looking for test storage... 
00:03:22.744 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:22.744 13:11:30 setup.sh.driver -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:22.744 13:11:30 setup.sh.driver -- common/autotest_common.sh@1691 -- # lcov --version 00:03:22.744 13:11:30 setup.sh.driver -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:22.744 13:11:30 setup.sh.driver -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:22.744 13:11:30 setup.sh.driver -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:22.744 13:11:30 setup.sh.driver -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:22.744 13:11:30 setup.sh.driver -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:22.744 13:11:30 setup.sh.driver -- scripts/common.sh@336 -- # IFS=.-: 00:03:22.744 13:11:30 setup.sh.driver -- scripts/common.sh@336 -- # read -ra ver1 00:03:22.744 13:11:30 setup.sh.driver -- scripts/common.sh@337 -- # IFS=.-: 00:03:22.744 13:11:30 setup.sh.driver -- scripts/common.sh@337 -- # read -ra ver2 00:03:22.744 13:11:30 setup.sh.driver -- scripts/common.sh@338 -- # local 'op=<' 00:03:22.744 13:11:30 setup.sh.driver -- scripts/common.sh@340 -- # ver1_l=2 00:03:22.744 13:11:30 setup.sh.driver -- scripts/common.sh@341 -- # ver2_l=1 00:03:22.744 13:11:30 setup.sh.driver -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:22.744 13:11:30 setup.sh.driver -- scripts/common.sh@344 -- # case "$op" in 00:03:22.744 13:11:30 setup.sh.driver -- scripts/common.sh@345 -- # : 1 00:03:22.744 13:11:30 setup.sh.driver -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:22.744 13:11:30 setup.sh.driver -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:22.744 13:11:30 setup.sh.driver -- scripts/common.sh@365 -- # decimal 1 00:03:22.744 13:11:30 setup.sh.driver -- scripts/common.sh@353 -- # local d=1 00:03:22.744 13:11:30 setup.sh.driver -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:22.744 13:11:30 setup.sh.driver -- scripts/common.sh@355 -- # echo 1 00:03:22.744 13:11:30 setup.sh.driver -- scripts/common.sh@365 -- # ver1[v]=1 00:03:22.744 13:11:30 setup.sh.driver -- scripts/common.sh@366 -- # decimal 2 00:03:22.744 13:11:30 setup.sh.driver -- scripts/common.sh@353 -- # local d=2 00:03:22.745 13:11:30 setup.sh.driver -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:22.745 13:11:30 setup.sh.driver -- scripts/common.sh@355 -- # echo 2 00:03:22.745 13:11:30 setup.sh.driver -- scripts/common.sh@366 -- # ver2[v]=2 00:03:22.745 13:11:30 setup.sh.driver -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:22.745 13:11:30 setup.sh.driver -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:22.745 13:11:30 setup.sh.driver -- scripts/common.sh@368 -- # return 0 00:03:22.745 13:11:30 setup.sh.driver -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:22.745 13:11:30 setup.sh.driver -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:22.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.745 --rc genhtml_branch_coverage=1 00:03:22.745 --rc genhtml_function_coverage=1 00:03:22.745 --rc genhtml_legend=1 00:03:22.745 --rc geninfo_all_blocks=1 00:03:22.745 --rc geninfo_unexecuted_blocks=1 00:03:22.745 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:22.745 ' 00:03:22.745 13:11:30 setup.sh.driver -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:22.745 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:03:22.745 --rc genhtml_branch_coverage=1 00:03:22.745 --rc genhtml_function_coverage=1 00:03:22.745 --rc genhtml_legend=1 00:03:22.745 --rc geninfo_all_blocks=1 00:03:22.745 --rc geninfo_unexecuted_blocks=1 00:03:22.745 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:22.745 ' 00:03:22.745 13:11:30 setup.sh.driver -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:22.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.745 --rc genhtml_branch_coverage=1 00:03:22.745 --rc genhtml_function_coverage=1 00:03:22.745 --rc genhtml_legend=1 00:03:22.745 --rc geninfo_all_blocks=1 00:03:22.745 --rc geninfo_unexecuted_blocks=1 00:03:22.745 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:22.745 ' 00:03:22.745 13:11:30 setup.sh.driver -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:22.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.745 --rc genhtml_branch_coverage=1 00:03:22.745 --rc genhtml_function_coverage=1 00:03:22.745 --rc genhtml_legend=1 00:03:22.745 --rc geninfo_all_blocks=1 00:03:22.745 --rc geninfo_unexecuted_blocks=1 00:03:22.745 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:22.745 ' 00:03:22.745 13:11:30 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:22.745 13:11:30 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:22.745 13:11:30 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:28.014 13:11:35 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:28.014 13:11:35 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:28.014 13:11:35 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:28.014 13:11:35 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:28.014 ************************************ 00:03:28.014 START TEST guess_driver 00:03:28.014 ************************************ 00:03:28.014 13:11:35 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:03:28.014 13:11:35 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:28.014 13:11:35 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:28.014 13:11:35 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:28.014 13:11:35 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:28.014 13:11:35 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:28.014 13:11:35 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:28.014 13:11:35 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:28.014 13:11:35 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:28.014 13:11:35 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:28.014 13:11:35 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:03:28.014 13:11:35 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:28.014 13:11:35 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:28.014 13:11:35 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:28.014 
13:11:35 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:28.014 13:11:35 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:28.014 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:28.014 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:28.014 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:28.014 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:28.014 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:28.014 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:28.014 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:28.014 13:11:35 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:28.014 13:11:35 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:28.014 13:11:35 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:28.014 13:11:35 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:28.014 13:11:35 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:28.014 Looking for driver=vfio-pci 00:03:28.014 13:11:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:28.014 13:11:35 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:28.014 13:11:35 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.014 13:11:35 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.314 13:11:38 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.314 13:11:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.692 13:11:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.692 13:11:40 
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.692 13:11:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.692 13:11:40 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:32.692 13:11:40 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:32.692 13:11:40 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:32.692 13:11:40 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:37.963 00:03:37.963 real 0m9.743s 00:03:37.963 user 0m2.514s 00:03:37.963 sys 0m4.970s 00:03:37.963 13:11:45 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:37.963 13:11:45 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:37.963 ************************************ 00:03:37.963 END TEST guess_driver 00:03:37.963 ************************************ 00:03:37.963 00:03:37.963 real 0m14.832s 00:03:37.963 user 0m3.986s 00:03:37.963 sys 0m7.842s 00:03:37.963 13:11:45 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:37.963 13:11:45 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:37.963 ************************************ 00:03:37.963 END TEST driver 00:03:37.963 ************************************ 00:03:37.963 13:11:45 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:03:37.963 13:11:45 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:37.963 13:11:45 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:37.963 13:11:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:37.963 ************************************ 00:03:37.963 START TEST devices 00:03:37.963 ************************************ 00:03:37.963 13:11:45 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:03:37.963 * Looking for test storage... 
00:03:37.963 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:37.963 13:11:45 setup.sh.devices -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:37.963 13:11:45 setup.sh.devices -- common/autotest_common.sh@1691 -- # lcov --version 00:03:37.963 13:11:45 setup.sh.devices -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:37.963 13:11:45 setup.sh.devices -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@336 -- # IFS=.-: 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@336 -- # read -ra ver1 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@337 -- # IFS=.-: 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@337 -- # read -ra ver2 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@338 -- # local 'op=<' 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@340 -- # ver1_l=2 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@341 -- # ver2_l=1 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@344 -- # case "$op" in 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@345 -- # : 1 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@365 -- # decimal 1 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@353 -- # local d=1 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@355 -- # echo 1 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@365 -- # ver1[v]=1 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@366 -- # decimal 2 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@353 -- # local d=2 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@355 -- # echo 2 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@366 -- # ver2[v]=2 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:37.963 13:11:45 setup.sh.devices -- scripts/common.sh@368 -- # return 0 00:03:37.963 13:11:45 setup.sh.devices -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:37.963 13:11:45 setup.sh.devices -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:37.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.963 --rc genhtml_branch_coverage=1 00:03:37.963 --rc genhtml_function_coverage=1 00:03:37.963 --rc genhtml_legend=1 00:03:37.963 --rc geninfo_all_blocks=1 00:03:37.963 --rc geninfo_unexecuted_blocks=1 00:03:37.963 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:37.963 ' 00:03:37.963 13:11:45 setup.sh.devices -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 
00:03:37.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.963 --rc genhtml_branch_coverage=1 00:03:37.963 --rc genhtml_function_coverage=1 00:03:37.963 --rc genhtml_legend=1 00:03:37.963 --rc geninfo_all_blocks=1 00:03:37.963 --rc geninfo_unexecuted_blocks=1 00:03:37.963 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:37.963 ' 00:03:37.963 13:11:45 setup.sh.devices -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:37.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.963 --rc genhtml_branch_coverage=1 00:03:37.963 --rc genhtml_function_coverage=1 00:03:37.963 --rc genhtml_legend=1 00:03:37.963 --rc geninfo_all_blocks=1 00:03:37.963 --rc geninfo_unexecuted_blocks=1 00:03:37.963 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:37.963 ' 00:03:37.963 13:11:45 setup.sh.devices -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:37.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.963 --rc genhtml_branch_coverage=1 00:03:37.963 --rc genhtml_function_coverage=1 00:03:37.963 --rc genhtml_legend=1 00:03:37.963 --rc geninfo_all_blocks=1 00:03:37.963 --rc geninfo_unexecuted_blocks=1 00:03:37.963 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:37.963 ' 00:03:37.964 13:11:45 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:37.964 13:11:45 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:37.964 13:11:45 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:37.964 13:11:45 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:41.258 13:11:49 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:41.258 13:11:49 setup.sh.devices -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:41.258 13:11:49 setup.sh.devices -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:41.258 13:11:49 setup.sh.devices -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:41.258 13:11:49 setup.sh.devices -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:41.258 13:11:49 setup.sh.devices -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:41.258 13:11:49 setup.sh.devices -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:41.258 13:11:49 setup.sh.devices -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:41.258 13:11:49 setup.sh.devices -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:41.258 13:11:49 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:41.258 13:11:49 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:41.258 13:11:49 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:41.258 13:11:49 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:41.258 13:11:49 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:41.258 13:11:49 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:41.258 13:11:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:41.258 13:11:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:41.258 13:11:49 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:03:41.258 13:11:49 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == 
*\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:41.258 13:11:49 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:41.258 13:11:49 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:03:41.258 13:11:49 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:41.258 No valid GPT data, bailing 00:03:41.258 13:11:49 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:41.258 13:11:49 setup.sh.devices -- scripts/common.sh@394 -- # pt= 00:03:41.258 13:11:49 setup.sh.devices -- scripts/common.sh@395 -- # return 1 00:03:41.258 13:11:49 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:41.258 13:11:49 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:41.258 13:11:49 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:41.258 13:11:49 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:03:41.258 13:11:49 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:03:41.258 13:11:49 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:41.258 13:11:49 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:03:41.258 13:11:49 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:41.258 13:11:49 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:41.258 13:11:49 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:41.258 13:11:49 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:41.258 13:11:49 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:41.258 13:11:49 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:41.258 ************************************ 00:03:41.258 START TEST nvme_mount 00:03:41.258 ************************************ 00:03:41.258 13:11:49 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:03:41.258 13:11:49 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:41.258 13:11:49 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:41.258 13:11:49 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:41.258 13:11:49 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:41.258 13:11:49 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:41.258 13:11:49 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:41.258 13:11:49 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:41.258 13:11:49 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:41.258 13:11:49 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:41.258 13:11:49 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:41.259 13:11:49 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:41.259 13:11:49 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:41.259 13:11:49 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:41.259 13:11:49 
setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:41.259 13:11:49 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:41.259 13:11:49 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:41.259 13:11:49 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:41.259 13:11:49 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:41.259 13:11:49 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:42.195 Creating new GPT entries in memory. 00:03:42.195 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:42.195 other utilities. 00:03:42.454 13:11:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:42.454 13:11:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:42.454 13:11:50 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:42.454 13:11:50 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:42.454 13:11:50 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:43.391 Creating new GPT entries in memory. 00:03:43.391 The operation has completed successfully. 00:03:43.391 13:11:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:43.391 13:11:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:43.391 13:11:51 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3809628 00:03:43.391 13:11:51 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.391 13:11:51 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:43.391 13:11:51 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.391 13:11:51 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:43.391 13:11:51 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:43.391 13:11:51 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.391 13:11:51 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.391 13:11:51 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:43.391 13:11:51 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:43.391 13:11:51 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.391 13:11:51 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.391 13:11:51 
setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:43.391 13:11:51 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:43.391 13:11:51 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:43.391 13:11:51 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:43.391 13:11:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.391 13:11:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:43.391 13:11:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:43.391 13:11:51 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.391 13:11:51 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:46.680 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:46.680 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:46.940 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:46.940 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:03:46.940 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:46.940 /dev/nvme0n1: calling 
ioctl to re-read partition table: Success 00:03:46.940 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:46.940 13:11:54 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:46.940 13:11:54 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.940 13:11:54 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:46.940 13:11:54 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:46.940 13:11:54 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.940 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:46.940 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:46.940 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:46.940 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.940 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:46.940 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:46.940 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:46.940 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:46.940 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:46.940 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.940 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:46.940 13:11:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:46.940 13:11:54 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.940 13:11:54 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:50.229 13:11:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.229 13:11:58 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:50.229 13:11:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:50.229 13:11:58 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.229 13:11:58 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:50.229 13:11:58 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:50.229 13:11:58 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.229 13:11:58 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:03:50.229 13:11:58 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:50.229 13:11:58 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:50.229 13:11:58 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:50.229 13:11:58 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:50.229 13:11:58 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:50.229 13:11:58 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:50.229 13:11:58 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:50.229 13:11:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:50.229 13:11:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:50.229 13:11:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.229 13:11:58 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.229 13:11:58 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.514 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.515 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.515 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.515 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.515 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:53.515 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:53.515 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.515 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:53.515 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:53.515 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:53.515 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:53.515 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.515 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:53.515 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:53.515 13:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:53.515 /dev/nvme0n1: 2 bytes were 
erased at offset 0x00000438 (ext4): 53 ef 00:03:53.515 00:03:53.515 real 0m12.295s 00:03:53.515 user 0m3.488s 00:03:53.515 sys 0m6.708s 00:03:53.515 13:12:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:53.515 13:12:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:53.515 ************************************ 00:03:53.515 END TEST nvme_mount 00:03:53.515 ************************************ 00:03:53.515 13:12:01 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:53.515 13:12:01 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:53.515 13:12:01 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:53.515 13:12:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:53.774 ************************************ 00:03:53.774 START TEST dm_mount 00:03:53.774 ************************************ 00:03:53.774 13:12:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:03:53.774 13:12:01 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:53.774 13:12:01 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:53.774 13:12:01 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:53.774 13:12:01 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:53.774 13:12:01 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:53.774 13:12:01 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:53.774 13:12:01 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:53.774 13:12:01 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:53.774 13:12:01 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:53.774 13:12:01 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:53.774 13:12:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:53.774 13:12:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:53.774 13:12:01 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:53.774 13:12:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:53.774 13:12:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:53.774 13:12:01 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:53.774 13:12:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:53.774 13:12:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:53.774 13:12:01 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:53.774 13:12:01 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:53.774 13:12:01 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:54.710 Creating new GPT entries in memory. 00:03:54.710 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:54.710 other utilities. 
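The dm_mount setup starting here reuses the partitioning helper already seen in nvme_mount: destroy any existing GPT, start a listener for the partition uevents, create fixed ~1 GiB partitions (1073741824 bytes / 512-byte sectors = 2097152 sectors) under a flock on the disk, and wait for the listener before touching the new nodes. A rough condensation of that sequence, with the device name and sector bounds taken from the log (illustrative only, not the exact setup/common.sh):

  disk=/dev/nvme0n1
  sgdisk "$disk" --zap-all                                              # wipe old GPT/MBR metadata
  ./scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 &   # wait for both partition uevents
  flock "$disk" sgdisk "$disk" --new=1:2048:2099199                     # partition 1: sectors 2048-2099199 (~1 GiB)
  flock "$disk" sgdisk "$disk" --new=2:2099200:4196351                  # partition 2: the next ~1 GiB
  wait $!                                                               # block until udev has created both nodes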
00:03:54.710 13:12:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:54.710 13:12:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:54.710 13:12:02 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:54.710 13:12:02 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:54.710 13:12:02 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:55.645 Creating new GPT entries in memory. 00:03:55.645 The operation has completed successfully. 00:03:55.645 13:12:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:55.645 13:12:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:55.646 13:12:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:55.646 13:12:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:55.646 13:12:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:57.023 The operation has completed successfully. 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3814158 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.023 13:12:04 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:59.556 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.815 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:59.815 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:59.815 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:59.815 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:59.815 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:59.815 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:59.815 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:59.815 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:59.815 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:59.815 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:59.815 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:59.815 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:59.815 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:59.815 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:59.815 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.815 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:59.815 13:12:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:59.815 13:12:07 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.815 13:12:07 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.108 13:12:10 
setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:03.108 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:03.108 00:04:03.108 real 0m9.206s 00:04:03.108 user 0m1.986s 00:04:03.108 sys 0m4.192s 
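The teardown traced just above undoes the device-mapper fixture in reverse order: unmount the test mount point if it is still mounted, remove the nvme_dm_test mapping, then wipe the filesystem signatures from both backing partitions. Condensed into a sketch (paths shortened with an assumed $SPDK_DIR; the real checks live in test/setup/devices.sh):

  dm_mount=$SPDK_DIR/test/setup/dm_mount                     # stands in for the long workspace path in the log
  mountpoint -q "$dm_mount" && umount "$dm_mount"            # only unmount if something is mounted there
  [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
  for part in /dev/nvme0n1p1 /dev/nvme0n1p2; do
      [[ -b $part ]] && wipefs --all "$part"                 # e.g. clears the ext4 magic at offset 0x438
  done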
00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.108 13:12:10 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:03.108 ************************************ 00:04:03.108 END TEST dm_mount 00:04:03.108 ************************************ 00:04:03.108 13:12:10 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:03.109 13:12:10 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:03.109 13:12:10 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:03.109 13:12:10 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:03.109 13:12:10 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:03.109 13:12:10 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:03.109 13:12:10 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:03.109 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:03.109 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:03.109 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:03.109 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:03.109 13:12:11 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:03.109 13:12:11 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:03.109 13:12:11 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:03.109 13:12:11 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:03.109 13:12:11 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:03.109 13:12:11 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:03.109 13:12:11 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:03.109 00:04:03.109 real 0m25.782s 00:04:03.109 user 0m6.997s 00:04:03.109 sys 0m13.570s 00:04:03.109 13:12:11 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.109 13:12:11 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:03.109 ************************************ 00:04:03.109 END TEST devices 00:04:03.109 ************************************ 00:04:03.368 00:04:03.369 real 1m27.085s 00:04:03.369 user 0m26.325s 00:04:03.369 sys 0m49.290s 00:04:03.369 13:12:11 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.369 13:12:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:03.369 ************************************ 00:04:03.369 END TEST setup.sh 00:04:03.369 ************************************ 00:04:03.369 13:12:11 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:04:06.659 Hugepages 00:04:06.659 node hugesize free / total 00:04:06.659 node0 1048576kB 0 / 0 00:04:06.659 node0 2048kB 1024 / 1024 00:04:06.659 node1 1048576kB 0 / 0 00:04:06.659 node1 2048kB 1024 / 1024 00:04:06.659 00:04:06.659 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:06.659 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:06.659 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:06.660 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:06.660 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:06.660 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 
00:04:06.660 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:06.660 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:06.660 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:06.660 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:06.660 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:06.660 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:06.660 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:06.660 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:06.660 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:06.660 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:06.660 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:06.660 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:06.660 13:12:14 -- spdk/autotest.sh@117 -- # uname -s 00:04:06.660 13:12:14 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:06.660 13:12:14 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:06.660 13:12:14 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:09.952 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:09.952 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:09.952 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:09.952 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:09.952 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:09.952 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:09.952 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:09.952 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:09.952 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:09.952 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:09.952 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:09.952 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:09.952 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:09.952 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:09.952 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:09.952 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:11.334 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:11.334 13:12:19 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:12.273 13:12:20 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:12.273 13:12:20 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:12.273 13:12:20 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:12.273 13:12:20 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:12.273 13:12:20 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:12.273 13:12:20 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:12.273 13:12:20 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:12.273 13:12:20 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:12.273 13:12:20 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:12.532 13:12:20 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:12.532 13:12:20 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:d8:00.0 00:04:12.532 13:12:20 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:15.823 Waiting for block devices as requested 00:04:15.823 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:15.823 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:15.823 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:15.823 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:15.823 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:16.083 
0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:16.083 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:16.083 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:16.343 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:16.343 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:16.343 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:16.602 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:16.602 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:16.602 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:16.865 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:16.865 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:16.865 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:17.289 13:12:25 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:17.289 13:12:25 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:17.289 13:12:25 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:17.289 13:12:25 -- common/autotest_common.sh@1485 -- # grep 0000:d8:00.0/nvme/nvme 00:04:17.289 13:12:25 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:17.289 13:12:25 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:17.289 13:12:25 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:17.289 13:12:25 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:17.289 13:12:25 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:17.289 13:12:25 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:17.289 13:12:25 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:17.289 13:12:25 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:17.289 13:12:25 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:17.289 13:12:25 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:04:17.290 13:12:25 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:17.290 13:12:25 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:17.290 13:12:25 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:17.290 13:12:25 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:17.290 13:12:25 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:17.290 13:12:25 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:17.290 13:12:25 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:17.290 13:12:25 -- common/autotest_common.sh@1541 -- # continue 00:04:17.290 13:12:25 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:17.290 13:12:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:17.290 13:12:25 -- common/autotest_common.sh@10 -- # set +x 00:04:17.290 13:12:25 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:17.290 13:12:25 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:17.290 13:12:25 -- common/autotest_common.sh@10 -- # set +x 00:04:17.290 13:12:25 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:20.627 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:20.627 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:20.627 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:20.627 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:20.627 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:20.627 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:20.627 0000:00:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:04:20.627 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:20.627 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:20.627 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:20.627 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:20.627 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:20.627 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:20.627 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:20.627 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:20.627 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:22.007 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:22.008 13:12:29 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:22.008 13:12:29 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:22.008 13:12:29 -- common/autotest_common.sh@10 -- # set +x 00:04:22.008 13:12:29 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:22.008 13:12:29 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:22.008 13:12:29 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:22.008 13:12:29 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:22.008 13:12:29 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:22.008 13:12:29 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:22.008 13:12:29 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:22.008 13:12:29 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:22.008 13:12:29 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:22.008 13:12:29 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:22.008 13:12:29 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:22.008 13:12:29 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:22.008 13:12:29 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:22.008 13:12:30 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:22.008 13:12:30 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:d8:00.0 00:04:22.008 13:12:30 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:22.008 13:12:30 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:22.008 13:12:30 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:04:22.008 13:12:30 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:22.008 13:12:30 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:04:22.008 13:12:30 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:04:22.008 13:12:30 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:d8:00.0 00:04:22.008 13:12:30 -- common/autotest_common.sh@1577 -- # [[ -z 0000:d8:00.0 ]] 00:04:22.008 13:12:30 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3824125 00:04:22.008 13:12:30 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.008 13:12:30 -- common/autotest_common.sh@1583 -- # waitforlisten 3824125 00:04:22.008 13:12:30 -- common/autotest_common.sh@831 -- # '[' -z 3824125 ']' 00:04:22.008 13:12:30 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.008 13:12:30 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:22.008 13:12:30 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:22.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.008 13:12:30 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:22.008 13:12:30 -- common/autotest_common.sh@10 -- # set +x 00:04:22.008 [2024-10-17 13:12:30.046611] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:04:22.008 [2024-10-17 13:12:30.046696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3824125 ] 00:04:22.267 [2024-10-17 13:12:30.117555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.267 [2024-10-17 13:12:30.160317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.527 13:12:30 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:22.527 13:12:30 -- common/autotest_common.sh@864 -- # return 0 00:04:22.527 13:12:30 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:22.527 13:12:30 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:22.527 13:12:30 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:25.817 nvme0n1 00:04:25.817 13:12:33 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:25.817 [2024-10-17 13:12:33.552313] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:25.817 request: 00:04:25.817 { 00:04:25.817 "nvme_ctrlr_name": "nvme0", 00:04:25.817 "password": "test", 00:04:25.817 "method": "bdev_nvme_opal_revert", 00:04:25.817 "req_id": 1 00:04:25.817 } 00:04:25.817 Got JSON-RPC error response 00:04:25.817 response: 00:04:25.817 { 00:04:25.817 "code": -32602, 00:04:25.817 "message": "Invalid parameters" 00:04:25.817 } 00:04:25.817 13:12:33 -- common/autotest_common.sh@1589 -- # true 00:04:25.817 13:12:33 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:25.817 13:12:33 -- common/autotest_common.sh@1593 -- # killprocess 3824125 00:04:25.817 13:12:33 -- common/autotest_common.sh@950 -- # '[' -z 3824125 ']' 00:04:25.817 13:12:33 -- common/autotest_common.sh@954 -- # kill -0 3824125 00:04:25.817 13:12:33 -- common/autotest_common.sh@955 -- # uname 00:04:25.817 13:12:33 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:25.817 13:12:33 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3824125 00:04:25.817 13:12:33 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:25.817 13:12:33 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:25.817 13:12:33 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3824125' 00:04:25.817 killing process with pid 3824125 00:04:25.817 13:12:33 -- common/autotest_common.sh@969 -- # kill 3824125 00:04:25.817 13:12:33 -- common/autotest_common.sh@974 -- # wait 3824125 00:04:27.723 13:12:35 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:27.723 13:12:35 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:27.723 13:12:35 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:27.723 13:12:35 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:27.723 13:12:35 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:27.723 13:12:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:27.723 13:12:35 -- 
common/autotest_common.sh@10 -- # set +x 00:04:27.723 13:12:35 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:27.723 13:12:35 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:27.723 13:12:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.723 13:12:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.723 13:12:35 -- common/autotest_common.sh@10 -- # set +x 00:04:27.983 ************************************ 00:04:27.983 START TEST env 00:04:27.983 ************************************ 00:04:27.983 13:12:35 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:27.983 * Looking for test storage... 00:04:27.983 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:04:27.983 13:12:35 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:27.983 13:12:35 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:27.983 13:12:35 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:27.983 13:12:35 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:27.983 13:12:35 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.983 13:12:35 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.983 13:12:35 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.983 13:12:35 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.983 13:12:35 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.983 13:12:35 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.983 13:12:35 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.983 13:12:35 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.983 13:12:35 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.983 13:12:35 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.983 13:12:35 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.983 13:12:35 env -- scripts/common.sh@344 -- # case "$op" in 00:04:27.983 13:12:35 env -- scripts/common.sh@345 -- # : 1 00:04:27.983 13:12:35 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.983 13:12:35 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:27.983 13:12:35 env -- scripts/common.sh@365 -- # decimal 1 00:04:27.983 13:12:35 env -- scripts/common.sh@353 -- # local d=1 00:04:27.983 13:12:35 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.983 13:12:35 env -- scripts/common.sh@355 -- # echo 1 00:04:27.983 13:12:35 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.983 13:12:35 env -- scripts/common.sh@366 -- # decimal 2 00:04:27.983 13:12:35 env -- scripts/common.sh@353 -- # local d=2 00:04:27.983 13:12:35 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.983 13:12:35 env -- scripts/common.sh@355 -- # echo 2 00:04:27.983 13:12:35 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.983 13:12:35 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.983 13:12:35 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.983 13:12:35 env -- scripts/common.sh@368 -- # return 0 00:04:27.983 13:12:35 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.983 13:12:35 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:27.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.983 --rc genhtml_branch_coverage=1 00:04:27.983 --rc genhtml_function_coverage=1 00:04:27.983 --rc genhtml_legend=1 00:04:27.983 --rc geninfo_all_blocks=1 00:04:27.983 --rc geninfo_unexecuted_blocks=1 00:04:27.983 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:27.983 ' 00:04:27.983 13:12:35 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:27.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.983 --rc genhtml_branch_coverage=1 00:04:27.983 --rc genhtml_function_coverage=1 00:04:27.983 --rc genhtml_legend=1 00:04:27.983 --rc geninfo_all_blocks=1 00:04:27.983 --rc geninfo_unexecuted_blocks=1 00:04:27.983 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:27.983 ' 00:04:27.983 13:12:35 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:27.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.983 --rc genhtml_branch_coverage=1 00:04:27.983 --rc genhtml_function_coverage=1 00:04:27.983 --rc genhtml_legend=1 00:04:27.983 --rc geninfo_all_blocks=1 00:04:27.983 --rc geninfo_unexecuted_blocks=1 00:04:27.983 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:27.983 ' 00:04:27.983 13:12:35 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:27.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.984 --rc genhtml_branch_coverage=1 00:04:27.984 --rc genhtml_function_coverage=1 00:04:27.984 --rc genhtml_legend=1 00:04:27.984 --rc geninfo_all_blocks=1 00:04:27.984 --rc geninfo_unexecuted_blocks=1 00:04:27.984 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:27.984 ' 00:04:27.984 13:12:35 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:27.984 13:12:35 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.984 13:12:35 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.984 13:12:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.984 ************************************ 00:04:27.984 START TEST env_memory 00:04:27.984 ************************************ 00:04:27.984 13:12:36 env.env_memory -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:27.984 00:04:27.984 00:04:27.984 CUnit - A unit testing framework for C - Version 2.1-3 00:04:27.984 http://cunit.sourceforge.net/ 00:04:27.984 00:04:27.984 00:04:27.984 Suite: memory 00:04:28.243 Test: alloc and free memory map ...[2024-10-17 13:12:36.043961] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:28.243 passed 00:04:28.243 Test: mem map translation ...[2024-10-17 13:12:36.057985] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:28.243 [2024-10-17 13:12:36.058004] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:28.243 [2024-10-17 13:12:36.058038] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:28.243 [2024-10-17 13:12:36.058048] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:28.243 passed 00:04:28.244 Test: mem map registration ...[2024-10-17 13:12:36.078386] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:28.244 [2024-10-17 13:12:36.078403] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:28.244 passed 00:04:28.244 Test: mem map adjacent registrations ...passed 00:04:28.244 00:04:28.244 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.244 suites 1 1 n/a 0 0 00:04:28.244 tests 4 4 4 0 0 00:04:28.244 asserts 152 152 152 0 n/a 00:04:28.244 00:04:28.244 Elapsed time = 0.087 seconds 00:04:28.244 00:04:28.244 real 0m0.100s 00:04:28.244 user 0m0.090s 00:04:28.244 sys 0m0.010s 00:04:28.244 13:12:36 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.244 13:12:36 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:28.244 ************************************ 00:04:28.244 END TEST env_memory 00:04:28.244 ************************************ 00:04:28.244 13:12:36 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:28.244 13:12:36 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.244 13:12:36 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.244 13:12:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.244 ************************************ 00:04:28.244 START TEST env_vtophys 00:04:28.244 ************************************ 00:04:28.244 13:12:36 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:28.244 EAL: lib.eal log level changed from notice to debug 00:04:28.244 EAL: Detected lcore 0 as core 0 on socket 0 00:04:28.244 EAL: Detected lcore 1 as core 1 on socket 0 00:04:28.244 EAL: Detected lcore 2 as core 2 on socket 0 00:04:28.244 EAL: Detected lcore 3 as 
core 3 on socket 0 00:04:28.244 EAL: Detected lcore 4 as core 4 on socket 0 00:04:28.244 EAL: Detected lcore 5 as core 5 on socket 0 00:04:28.244 EAL: Detected lcore 6 as core 6 on socket 0 00:04:28.244 EAL: Detected lcore 7 as core 8 on socket 0 00:04:28.244 EAL: Detected lcore 8 as core 9 on socket 0 00:04:28.244 EAL: Detected lcore 9 as core 10 on socket 0 00:04:28.244 EAL: Detected lcore 10 as core 11 on socket 0 00:04:28.244 EAL: Detected lcore 11 as core 12 on socket 0 00:04:28.244 EAL: Detected lcore 12 as core 13 on socket 0 00:04:28.244 EAL: Detected lcore 13 as core 14 on socket 0 00:04:28.244 EAL: Detected lcore 14 as core 16 on socket 0 00:04:28.244 EAL: Detected lcore 15 as core 17 on socket 0 00:04:28.244 EAL: Detected lcore 16 as core 18 on socket 0 00:04:28.244 EAL: Detected lcore 17 as core 19 on socket 0 00:04:28.244 EAL: Detected lcore 18 as core 20 on socket 0 00:04:28.244 EAL: Detected lcore 19 as core 21 on socket 0 00:04:28.244 EAL: Detected lcore 20 as core 22 on socket 0 00:04:28.244 EAL: Detected lcore 21 as core 24 on socket 0 00:04:28.244 EAL: Detected lcore 22 as core 25 on socket 0 00:04:28.244 EAL: Detected lcore 23 as core 26 on socket 0 00:04:28.244 EAL: Detected lcore 24 as core 27 on socket 0 00:04:28.244 EAL: Detected lcore 25 as core 28 on socket 0 00:04:28.244 EAL: Detected lcore 26 as core 29 on socket 0 00:04:28.244 EAL: Detected lcore 27 as core 30 on socket 0 00:04:28.244 EAL: Detected lcore 28 as core 0 on socket 1 00:04:28.244 EAL: Detected lcore 29 as core 1 on socket 1 00:04:28.244 EAL: Detected lcore 30 as core 2 on socket 1 00:04:28.244 EAL: Detected lcore 31 as core 3 on socket 1 00:04:28.244 EAL: Detected lcore 32 as core 4 on socket 1 00:04:28.244 EAL: Detected lcore 33 as core 5 on socket 1 00:04:28.244 EAL: Detected lcore 34 as core 6 on socket 1 00:04:28.244 EAL: Detected lcore 35 as core 8 on socket 1 00:04:28.244 EAL: Detected lcore 36 as core 9 on socket 1 00:04:28.244 EAL: Detected lcore 37 as core 10 on socket 1 00:04:28.244 EAL: Detected lcore 38 as core 11 on socket 1 00:04:28.244 EAL: Detected lcore 39 as core 12 on socket 1 00:04:28.244 EAL: Detected lcore 40 as core 13 on socket 1 00:04:28.244 EAL: Detected lcore 41 as core 14 on socket 1 00:04:28.244 EAL: Detected lcore 42 as core 16 on socket 1 00:04:28.244 EAL: Detected lcore 43 as core 17 on socket 1 00:04:28.244 EAL: Detected lcore 44 as core 18 on socket 1 00:04:28.244 EAL: Detected lcore 45 as core 19 on socket 1 00:04:28.244 EAL: Detected lcore 46 as core 20 on socket 1 00:04:28.244 EAL: Detected lcore 47 as core 21 on socket 1 00:04:28.244 EAL: Detected lcore 48 as core 22 on socket 1 00:04:28.244 EAL: Detected lcore 49 as core 24 on socket 1 00:04:28.244 EAL: Detected lcore 50 as core 25 on socket 1 00:04:28.244 EAL: Detected lcore 51 as core 26 on socket 1 00:04:28.244 EAL: Detected lcore 52 as core 27 on socket 1 00:04:28.244 EAL: Detected lcore 53 as core 28 on socket 1 00:04:28.244 EAL: Detected lcore 54 as core 29 on socket 1 00:04:28.244 EAL: Detected lcore 55 as core 30 on socket 1 00:04:28.244 EAL: Detected lcore 56 as core 0 on socket 0 00:04:28.244 EAL: Detected lcore 57 as core 1 on socket 0 00:04:28.244 EAL: Detected lcore 58 as core 2 on socket 0 00:04:28.244 EAL: Detected lcore 59 as core 3 on socket 0 00:04:28.244 EAL: Detected lcore 60 as core 4 on socket 0 00:04:28.244 EAL: Detected lcore 61 as core 5 on socket 0 00:04:28.244 EAL: Detected lcore 62 as core 6 on socket 0 00:04:28.244 EAL: Detected lcore 63 as core 8 on socket 0 00:04:28.244 EAL: 
Detected lcore 64 as core 9 on socket 0 00:04:28.244 EAL: Detected lcore 65 as core 10 on socket 0 00:04:28.244 EAL: Detected lcore 66 as core 11 on socket 0 00:04:28.244 EAL: Detected lcore 67 as core 12 on socket 0 00:04:28.244 EAL: Detected lcore 68 as core 13 on socket 0 00:04:28.244 EAL: Detected lcore 69 as core 14 on socket 0 00:04:28.244 EAL: Detected lcore 70 as core 16 on socket 0 00:04:28.244 EAL: Detected lcore 71 as core 17 on socket 0 00:04:28.244 EAL: Detected lcore 72 as core 18 on socket 0 00:04:28.244 EAL: Detected lcore 73 as core 19 on socket 0 00:04:28.244 EAL: Detected lcore 74 as core 20 on socket 0 00:04:28.244 EAL: Detected lcore 75 as core 21 on socket 0 00:04:28.244 EAL: Detected lcore 76 as core 22 on socket 0 00:04:28.244 EAL: Detected lcore 77 as core 24 on socket 0 00:04:28.244 EAL: Detected lcore 78 as core 25 on socket 0 00:04:28.244 EAL: Detected lcore 79 as core 26 on socket 0 00:04:28.244 EAL: Detected lcore 80 as core 27 on socket 0 00:04:28.244 EAL: Detected lcore 81 as core 28 on socket 0 00:04:28.244 EAL: Detected lcore 82 as core 29 on socket 0 00:04:28.244 EAL: Detected lcore 83 as core 30 on socket 0 00:04:28.244 EAL: Detected lcore 84 as core 0 on socket 1 00:04:28.244 EAL: Detected lcore 85 as core 1 on socket 1 00:04:28.244 EAL: Detected lcore 86 as core 2 on socket 1 00:04:28.244 EAL: Detected lcore 87 as core 3 on socket 1 00:04:28.244 EAL: Detected lcore 88 as core 4 on socket 1 00:04:28.244 EAL: Detected lcore 89 as core 5 on socket 1 00:04:28.244 EAL: Detected lcore 90 as core 6 on socket 1 00:04:28.244 EAL: Detected lcore 91 as core 8 on socket 1 00:04:28.244 EAL: Detected lcore 92 as core 9 on socket 1 00:04:28.244 EAL: Detected lcore 93 as core 10 on socket 1 00:04:28.244 EAL: Detected lcore 94 as core 11 on socket 1 00:04:28.244 EAL: Detected lcore 95 as core 12 on socket 1 00:04:28.244 EAL: Detected lcore 96 as core 13 on socket 1 00:04:28.244 EAL: Detected lcore 97 as core 14 on socket 1 00:04:28.244 EAL: Detected lcore 98 as core 16 on socket 1 00:04:28.244 EAL: Detected lcore 99 as core 17 on socket 1 00:04:28.244 EAL: Detected lcore 100 as core 18 on socket 1 00:04:28.244 EAL: Detected lcore 101 as core 19 on socket 1 00:04:28.244 EAL: Detected lcore 102 as core 20 on socket 1 00:04:28.244 EAL: Detected lcore 103 as core 21 on socket 1 00:04:28.244 EAL: Detected lcore 104 as core 22 on socket 1 00:04:28.244 EAL: Detected lcore 105 as core 24 on socket 1 00:04:28.244 EAL: Detected lcore 106 as core 25 on socket 1 00:04:28.244 EAL: Detected lcore 107 as core 26 on socket 1 00:04:28.244 EAL: Detected lcore 108 as core 27 on socket 1 00:04:28.244 EAL: Detected lcore 109 as core 28 on socket 1 00:04:28.244 EAL: Detected lcore 110 as core 29 on socket 1 00:04:28.244 EAL: Detected lcore 111 as core 30 on socket 1 00:04:28.244 EAL: Maximum logical cores by configuration: 128 00:04:28.244 EAL: Detected CPU lcores: 112 00:04:28.244 EAL: Detected NUMA nodes: 2 00:04:28.244 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:28.244 EAL: Checking presence of .so 'librte_eal.so.24' 00:04:28.244 EAL: Checking presence of .so 'librte_eal.so' 00:04:28.244 EAL: Detected static linkage of DPDK 00:04:28.244 EAL: No shared files mode enabled, IPC will be disabled 00:04:28.244 EAL: Bus pci wants IOVA as 'DC' 00:04:28.244 EAL: Buses did not request a specific IOVA mode. 00:04:28.244 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:28.244 EAL: Selected IOVA mode 'VA' 00:04:28.244 EAL: Probing VFIO support... 
00:04:28.244 EAL: IOMMU type 1 (Type 1) is supported 00:04:28.244 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:28.244 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:28.244 EAL: VFIO support initialized 00:04:28.244 EAL: Ask a virtual area of 0x2e000 bytes 00:04:28.244 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:28.244 EAL: Setting up physically contiguous memory... 00:04:28.244 EAL: Setting maximum number of open files to 524288 00:04:28.244 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:28.244 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:28.244 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:28.244 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.244 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:28.244 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.244 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.244 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:28.244 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:28.244 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.244 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:28.244 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.244 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.244 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:28.244 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:28.244 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.244 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:28.244 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.244 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.244 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:28.244 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:28.244 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.244 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:28.244 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.244 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.244 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:28.244 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:28.244 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:28.244 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.245 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:28.245 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:28.245 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.245 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:28.245 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:28.245 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.245 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:28.245 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:28.245 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.245 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:28.245 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:28.245 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.245 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:28.245 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:28.245 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.245 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:04:28.245 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:28.245 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.245 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:28.245 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:28.245 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.245 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:28.245 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:28.245 EAL: Hugepages will be freed exactly as allocated. 00:04:28.245 EAL: No shared files mode enabled, IPC is disabled 00:04:28.245 EAL: No shared files mode enabled, IPC is disabled 00:04:28.245 EAL: TSC frequency is ~2500000 KHz 00:04:28.245 EAL: Main lcore 0 is ready (tid=7f38c63b6a00;cpuset=[0]) 00:04:28.245 EAL: Trying to obtain current memory policy. 00:04:28.245 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.245 EAL: Restoring previous memory policy: 0 00:04:28.245 EAL: request: mp_malloc_sync 00:04:28.245 EAL: No shared files mode enabled, IPC is disabled 00:04:28.245 EAL: Heap on socket 0 was expanded by 2MB 00:04:28.245 EAL: No shared files mode enabled, IPC is disabled 00:04:28.245 EAL: Mem event callback 'spdk:(nil)' registered 00:04:28.245 00:04:28.245 00:04:28.245 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.245 http://cunit.sourceforge.net/ 00:04:28.245 00:04:28.245 00:04:28.245 Suite: components_suite 00:04:28.245 Test: vtophys_malloc_test ...passed 00:04:28.245 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:28.245 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.245 EAL: Restoring previous memory policy: 4 00:04:28.245 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.245 EAL: request: mp_malloc_sync 00:04:28.245 EAL: No shared files mode enabled, IPC is disabled 00:04:28.245 EAL: Heap on socket 0 was expanded by 4MB 00:04:28.245 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.245 EAL: request: mp_malloc_sync 00:04:28.245 EAL: No shared files mode enabled, IPC is disabled 00:04:28.245 EAL: Heap on socket 0 was shrunk by 4MB 00:04:28.245 EAL: Trying to obtain current memory policy. 00:04:28.245 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.245 EAL: Restoring previous memory policy: 4 00:04:28.245 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.245 EAL: request: mp_malloc_sync 00:04:28.245 EAL: No shared files mode enabled, IPC is disabled 00:04:28.245 EAL: Heap on socket 0 was expanded by 6MB 00:04:28.245 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.245 EAL: request: mp_malloc_sync 00:04:28.245 EAL: No shared files mode enabled, IPC is disabled 00:04:28.245 EAL: Heap on socket 0 was shrunk by 6MB 00:04:28.245 EAL: Trying to obtain current memory policy. 00:04:28.245 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.245 EAL: Restoring previous memory policy: 4 00:04:28.245 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.245 EAL: request: mp_malloc_sync 00:04:28.245 EAL: No shared files mode enabled, IPC is disabled 00:04:28.245 EAL: Heap on socket 0 was expanded by 10MB 00:04:28.245 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.245 EAL: request: mp_malloc_sync 00:04:28.245 EAL: No shared files mode enabled, IPC is disabled 00:04:28.245 EAL: Heap on socket 0 was shrunk by 10MB 00:04:28.245 EAL: Trying to obtain current memory policy. 
00:04:28.245 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.245 EAL: Restoring previous memory policy: 4 00:04:28.245 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.245 EAL: request: mp_malloc_sync 00:04:28.245 EAL: No shared files mode enabled, IPC is disabled 00:04:28.245 EAL: Heap on socket 0 was expanded by 18MB 00:04:28.245 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.245 EAL: request: mp_malloc_sync 00:04:28.245 EAL: No shared files mode enabled, IPC is disabled 00:04:28.245 EAL: Heap on socket 0 was shrunk by 18MB 00:04:28.245 EAL: Trying to obtain current memory policy. 00:04:28.245 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.245 EAL: Restoring previous memory policy: 4 00:04:28.245 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.245 EAL: request: mp_malloc_sync 00:04:28.245 EAL: No shared files mode enabled, IPC is disabled 00:04:28.245 EAL: Heap on socket 0 was expanded by 34MB 00:04:28.504 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.504 EAL: request: mp_malloc_sync 00:04:28.504 EAL: No shared files mode enabled, IPC is disabled 00:04:28.504 EAL: Heap on socket 0 was shrunk by 34MB 00:04:28.504 EAL: Trying to obtain current memory policy. 00:04:28.504 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.504 EAL: Restoring previous memory policy: 4 00:04:28.504 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.504 EAL: request: mp_malloc_sync 00:04:28.504 EAL: No shared files mode enabled, IPC is disabled 00:04:28.504 EAL: Heap on socket 0 was expanded by 66MB 00:04:28.504 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.504 EAL: request: mp_malloc_sync 00:04:28.504 EAL: No shared files mode enabled, IPC is disabled 00:04:28.504 EAL: Heap on socket 0 was shrunk by 66MB 00:04:28.504 EAL: Trying to obtain current memory policy. 00:04:28.504 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.504 EAL: Restoring previous memory policy: 4 00:04:28.504 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.504 EAL: request: mp_malloc_sync 00:04:28.504 EAL: No shared files mode enabled, IPC is disabled 00:04:28.504 EAL: Heap on socket 0 was expanded by 130MB 00:04:28.504 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.504 EAL: request: mp_malloc_sync 00:04:28.504 EAL: No shared files mode enabled, IPC is disabled 00:04:28.504 EAL: Heap on socket 0 was shrunk by 130MB 00:04:28.504 EAL: Trying to obtain current memory policy. 00:04:28.504 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.504 EAL: Restoring previous memory policy: 4 00:04:28.504 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.504 EAL: request: mp_malloc_sync 00:04:28.504 EAL: No shared files mode enabled, IPC is disabled 00:04:28.505 EAL: Heap on socket 0 was expanded by 258MB 00:04:28.505 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.505 EAL: request: mp_malloc_sync 00:04:28.505 EAL: No shared files mode enabled, IPC is disabled 00:04:28.505 EAL: Heap on socket 0 was shrunk by 258MB 00:04:28.505 EAL: Trying to obtain current memory policy. 
00:04:28.505 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.764 EAL: Restoring previous memory policy: 4 00:04:28.764 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.764 EAL: request: mp_malloc_sync 00:04:28.764 EAL: No shared files mode enabled, IPC is disabled 00:04:28.764 EAL: Heap on socket 0 was expanded by 514MB 00:04:28.764 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.764 EAL: request: mp_malloc_sync 00:04:28.764 EAL: No shared files mode enabled, IPC is disabled 00:04:28.764 EAL: Heap on socket 0 was shrunk by 514MB 00:04:28.764 EAL: Trying to obtain current memory policy. 00:04:28.764 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.023 EAL: Restoring previous memory policy: 4 00:04:29.023 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.023 EAL: request: mp_malloc_sync 00:04:29.023 EAL: No shared files mode enabled, IPC is disabled 00:04:29.023 EAL: Heap on socket 0 was expanded by 1026MB 00:04:29.283 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.283 EAL: request: mp_malloc_sync 00:04:29.283 EAL: No shared files mode enabled, IPC is disabled 00:04:29.283 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:29.283 passed 00:04:29.283 00:04:29.283 Run Summary: Type Total Ran Passed Failed Inactive 00:04:29.283 suites 1 1 n/a 0 0 00:04:29.283 tests 2 2 2 0 0 00:04:29.283 asserts 497 497 497 0 n/a 00:04:29.283 00:04:29.283 Elapsed time = 0.960 seconds 00:04:29.283 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.283 EAL: request: mp_malloc_sync 00:04:29.283 EAL: No shared files mode enabled, IPC is disabled 00:04:29.283 EAL: Heap on socket 0 was shrunk by 2MB 00:04:29.283 EAL: No shared files mode enabled, IPC is disabled 00:04:29.283 EAL: No shared files mode enabled, IPC is disabled 00:04:29.283 EAL: No shared files mode enabled, IPC is disabled 00:04:29.283 00:04:29.283 real 0m1.087s 00:04:29.283 user 0m0.616s 00:04:29.283 sys 0m0.436s 00:04:29.283 13:12:37 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.283 13:12:37 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:29.283 ************************************ 00:04:29.283 END TEST env_vtophys 00:04:29.283 ************************************ 00:04:29.283 13:12:37 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:29.283 13:12:37 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:29.283 13:12:37 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.283 13:12:37 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.542 ************************************ 00:04:29.543 START TEST env_pci 00:04:29.543 ************************************ 00:04:29.543 13:12:37 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:29.543 00:04:29.543 00:04:29.543 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.543 http://cunit.sourceforge.net/ 00:04:29.543 00:04:29.543 00:04:29.543 Suite: pci 00:04:29.543 Test: pci_hook ...[2024-10-17 13:12:37.357783] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1050:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3825426 has claimed it 00:04:29.543 EAL: Cannot find device (10000:00:01.0) 00:04:29.543 EAL: Failed to attach device on primary process 00:04:29.543 passed 00:04:29.543 00:04:29.543 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:29.543 suites 1 1 n/a 0 0 00:04:29.543 tests 1 1 1 0 0 00:04:29.543 asserts 25 25 25 0 n/a 00:04:29.543 00:04:29.543 Elapsed time = 0.024 seconds 00:04:29.543 00:04:29.543 real 0m0.034s 00:04:29.543 user 0m0.009s 00:04:29.543 sys 0m0.026s 00:04:29.543 13:12:37 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.543 13:12:37 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:29.543 ************************************ 00:04:29.543 END TEST env_pci 00:04:29.543 ************************************ 00:04:29.543 13:12:37 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:29.543 13:12:37 env -- env/env.sh@15 -- # uname 00:04:29.543 13:12:37 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:29.543 13:12:37 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:29.543 13:12:37 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:29.543 13:12:37 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:29.543 13:12:37 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.543 13:12:37 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.543 ************************************ 00:04:29.543 START TEST env_dpdk_post_init 00:04:29.543 ************************************ 00:04:29.543 13:12:37 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:29.543 EAL: Detected CPU lcores: 112 00:04:29.543 EAL: Detected NUMA nodes: 2 00:04:29.543 EAL: Detected static linkage of DPDK 00:04:29.543 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:29.543 EAL: Selected IOVA mode 'VA' 00:04:29.543 EAL: VFIO support initialized 00:04:29.543 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:29.543 EAL: Using IOMMU type 1 (Type 1) 00:04:30.480 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:33.770 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:33.770 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001000000 00:04:34.338 Starting DPDK initialization... 00:04:34.338 Starting SPDK post initialization... 00:04:34.338 SPDK NVMe probe 00:04:34.338 Attaching to 0000:d8:00.0 00:04:34.338 Attached to 0000:d8:00.0 00:04:34.338 Cleaning up... 
00:04:34.338 00:04:34.338 real 0m4.711s 00:04:34.338 user 0m3.318s 00:04:34.338 sys 0m0.638s 00:04:34.338 13:12:42 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.338 13:12:42 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:34.338 ************************************ 00:04:34.338 END TEST env_dpdk_post_init 00:04:34.338 ************************************ 00:04:34.338 13:12:42 env -- env/env.sh@26 -- # uname 00:04:34.338 13:12:42 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:34.338 13:12:42 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:34.338 13:12:42 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.338 13:12:42 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.338 13:12:42 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.338 ************************************ 00:04:34.338 START TEST env_mem_callbacks 00:04:34.338 ************************************ 00:04:34.339 13:12:42 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:34.339 EAL: Detected CPU lcores: 112 00:04:34.339 EAL: Detected NUMA nodes: 2 00:04:34.339 EAL: Detected static linkage of DPDK 00:04:34.339 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:34.339 EAL: Selected IOVA mode 'VA' 00:04:34.339 EAL: VFIO support initialized 00:04:34.339 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:34.339 00:04:34.339 00:04:34.339 CUnit - A unit testing framework for C - Version 2.1-3 00:04:34.339 http://cunit.sourceforge.net/ 00:04:34.339 00:04:34.339 00:04:34.339 Suite: memory 00:04:34.339 Test: test ... 
00:04:34.339 register 0x200000200000 2097152 00:04:34.339 malloc 3145728 00:04:34.339 register 0x200000400000 4194304 00:04:34.339 buf 0x200000500000 len 3145728 PASSED 00:04:34.339 malloc 64 00:04:34.339 buf 0x2000004fff40 len 64 PASSED 00:04:34.339 malloc 4194304 00:04:34.339 register 0x200000800000 6291456 00:04:34.339 buf 0x200000a00000 len 4194304 PASSED 00:04:34.339 free 0x200000500000 3145728 00:04:34.339 free 0x2000004fff40 64 00:04:34.339 unregister 0x200000400000 4194304 PASSED 00:04:34.339 free 0x200000a00000 4194304 00:04:34.339 unregister 0x200000800000 6291456 PASSED 00:04:34.339 malloc 8388608 00:04:34.339 register 0x200000400000 10485760 00:04:34.339 buf 0x200000600000 len 8388608 PASSED 00:04:34.339 free 0x200000600000 8388608 00:04:34.339 unregister 0x200000400000 10485760 PASSED 00:04:34.339 passed 00:04:34.339 00:04:34.339 Run Summary: Type Total Ran Passed Failed Inactive 00:04:34.339 suites 1 1 n/a 0 0 00:04:34.339 tests 1 1 1 0 0 00:04:34.339 asserts 15 15 15 0 n/a 00:04:34.339 00:04:34.339 Elapsed time = 0.006 seconds 00:04:34.339 00:04:34.339 real 0m0.071s 00:04:34.339 user 0m0.019s 00:04:34.339 sys 0m0.052s 00:04:34.339 13:12:42 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.339 13:12:42 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:34.339 ************************************ 00:04:34.339 END TEST env_mem_callbacks 00:04:34.339 ************************************ 00:04:34.339 00:04:34.339 real 0m6.609s 00:04:34.339 user 0m4.306s 00:04:34.339 sys 0m1.559s 00:04:34.339 13:12:42 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.339 13:12:42 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.339 ************************************ 00:04:34.339 END TEST env 00:04:34.339 ************************************ 00:04:34.597 13:12:42 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:34.597 13:12:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.597 13:12:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.597 13:12:42 -- common/autotest_common.sh@10 -- # set +x 00:04:34.597 ************************************ 00:04:34.597 START TEST rpc 00:04:34.597 ************************************ 00:04:34.597 13:12:42 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:34.597 * Looking for test storage... 
00:04:34.597 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:34.597 13:12:42 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:34.598 13:12:42 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:34.598 13:12:42 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:34.857 13:12:42 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:34.857 13:12:42 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.857 13:12:42 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.857 13:12:42 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.857 13:12:42 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.857 13:12:42 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.857 13:12:42 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.857 13:12:42 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.857 13:12:42 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.857 13:12:42 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.857 13:12:42 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.857 13:12:42 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.857 13:12:42 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:34.857 13:12:42 rpc -- scripts/common.sh@345 -- # : 1 00:04:34.857 13:12:42 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.857 13:12:42 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:34.857 13:12:42 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:34.857 13:12:42 rpc -- scripts/common.sh@353 -- # local d=1 00:04:34.857 13:12:42 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.857 13:12:42 rpc -- scripts/common.sh@355 -- # echo 1 00:04:34.857 13:12:42 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.857 13:12:42 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:34.857 13:12:42 rpc -- scripts/common.sh@353 -- # local d=2 00:04:34.857 13:12:42 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.857 13:12:42 rpc -- scripts/common.sh@355 -- # echo 2 00:04:34.857 13:12:42 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.857 13:12:42 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.857 13:12:42 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.857 13:12:42 rpc -- scripts/common.sh@368 -- # return 0 00:04:34.857 13:12:42 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.857 13:12:42 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.857 --rc genhtml_branch_coverage=1 00:04:34.857 --rc genhtml_function_coverage=1 00:04:34.857 --rc genhtml_legend=1 00:04:34.857 --rc geninfo_all_blocks=1 00:04:34.857 --rc geninfo_unexecuted_blocks=1 00:04:34.857 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:34.857 ' 00:04:34.857 13:12:42 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.857 --rc genhtml_branch_coverage=1 00:04:34.857 --rc genhtml_function_coverage=1 00:04:34.857 --rc genhtml_legend=1 00:04:34.857 --rc geninfo_all_blocks=1 00:04:34.857 --rc geninfo_unexecuted_blocks=1 00:04:34.857 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:34.857 ' 00:04:34.857 13:12:42 rpc -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:04:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.857 --rc genhtml_branch_coverage=1 00:04:34.857 --rc genhtml_function_coverage=1 00:04:34.857 --rc genhtml_legend=1 00:04:34.857 --rc geninfo_all_blocks=1 00:04:34.857 --rc geninfo_unexecuted_blocks=1 00:04:34.857 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:34.857 ' 00:04:34.857 13:12:42 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.857 --rc genhtml_branch_coverage=1 00:04:34.857 --rc genhtml_function_coverage=1 00:04:34.857 --rc genhtml_legend=1 00:04:34.857 --rc geninfo_all_blocks=1 00:04:34.857 --rc geninfo_unexecuted_blocks=1 00:04:34.857 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:34.857 ' 00:04:34.857 13:12:42 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3826594 00:04:34.857 13:12:42 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.857 13:12:42 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:34.857 13:12:42 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3826594 00:04:34.857 13:12:42 rpc -- common/autotest_common.sh@831 -- # '[' -z 3826594 ']' 00:04:34.857 13:12:42 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.857 13:12:42 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:34.857 13:12:42 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.857 13:12:42 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:34.857 13:12:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.857 [2024-10-17 13:12:42.701558] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:04:34.857 [2024-10-17 13:12:42.701639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3826594 ] 00:04:34.857 [2024-10-17 13:12:42.768970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.857 [2024-10-17 13:12:42.807221] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:34.857 [2024-10-17 13:12:42.807262] app.c: 616:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3826594' to capture a snapshot of events at runtime. 00:04:34.857 [2024-10-17 13:12:42.807271] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:34.857 [2024-10-17 13:12:42.807279] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:34.857 [2024-10-17 13:12:42.807286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3826594 for offline analysis/debug. 
00:04:34.857 [2024-10-17 13:12:42.807899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.116 13:12:43 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:35.116 13:12:43 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:35.116 13:12:43 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:35.117 13:12:43 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:35.117 13:12:43 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:35.117 13:12:43 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:35.117 13:12:43 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.117 13:12:43 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.117 13:12:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.117 ************************************ 00:04:35.117 START TEST rpc_integrity 00:04:35.117 ************************************ 00:04:35.117 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:35.117 13:12:43 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:35.117 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.117 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.117 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.117 13:12:43 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:35.117 13:12:43 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:35.117 13:12:43 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:35.117 13:12:43 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:35.117 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.117 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.117 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.117 13:12:43 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:35.117 13:12:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:35.117 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.117 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.117 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.117 13:12:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:35.117 { 00:04:35.117 "name": "Malloc0", 00:04:35.117 "aliases": [ 00:04:35.117 "87d57717-f942-4612-8779-7a730e6b8778" 00:04:35.117 ], 00:04:35.117 "product_name": "Malloc disk", 00:04:35.117 "block_size": 512, 00:04:35.117 "num_blocks": 16384, 00:04:35.117 "uuid": "87d57717-f942-4612-8779-7a730e6b8778", 00:04:35.117 "assigned_rate_limits": { 00:04:35.117 "rw_ios_per_sec": 0, 00:04:35.117 "rw_mbytes_per_sec": 0, 00:04:35.117 "r_mbytes_per_sec": 0, 00:04:35.117 "w_mbytes_per_sec": 
0 00:04:35.117 }, 00:04:35.117 "claimed": false, 00:04:35.117 "zoned": false, 00:04:35.117 "supported_io_types": { 00:04:35.117 "read": true, 00:04:35.117 "write": true, 00:04:35.117 "unmap": true, 00:04:35.117 "flush": true, 00:04:35.117 "reset": true, 00:04:35.117 "nvme_admin": false, 00:04:35.117 "nvme_io": false, 00:04:35.117 "nvme_io_md": false, 00:04:35.117 "write_zeroes": true, 00:04:35.117 "zcopy": true, 00:04:35.117 "get_zone_info": false, 00:04:35.117 "zone_management": false, 00:04:35.117 "zone_append": false, 00:04:35.117 "compare": false, 00:04:35.117 "compare_and_write": false, 00:04:35.117 "abort": true, 00:04:35.117 "seek_hole": false, 00:04:35.117 "seek_data": false, 00:04:35.117 "copy": true, 00:04:35.117 "nvme_iov_md": false 00:04:35.117 }, 00:04:35.117 "memory_domains": [ 00:04:35.117 { 00:04:35.117 "dma_device_id": "system", 00:04:35.117 "dma_device_type": 1 00:04:35.117 }, 00:04:35.117 { 00:04:35.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.117 "dma_device_type": 2 00:04:35.117 } 00:04:35.117 ], 00:04:35.117 "driver_specific": {} 00:04:35.117 } 00:04:35.117 ]' 00:04:35.117 13:12:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:35.376 13:12:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:35.377 13:12:43 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:35.377 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.377 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.377 [2024-10-17 13:12:43.177578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:35.377 [2024-10-17 13:12:43.177611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:35.377 [2024-10-17 13:12:43.177630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x569f610 00:04:35.377 [2024-10-17 13:12:43.177639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:35.377 [2024-10-17 13:12:43.178542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:35.377 [2024-10-17 13:12:43.178564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:35.377 Passthru0 00:04:35.377 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.377 13:12:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:35.377 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.377 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.377 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.377 13:12:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:35.377 { 00:04:35.377 "name": "Malloc0", 00:04:35.377 "aliases": [ 00:04:35.377 "87d57717-f942-4612-8779-7a730e6b8778" 00:04:35.377 ], 00:04:35.377 "product_name": "Malloc disk", 00:04:35.377 "block_size": 512, 00:04:35.377 "num_blocks": 16384, 00:04:35.377 "uuid": "87d57717-f942-4612-8779-7a730e6b8778", 00:04:35.377 "assigned_rate_limits": { 00:04:35.377 "rw_ios_per_sec": 0, 00:04:35.377 "rw_mbytes_per_sec": 0, 00:04:35.377 "r_mbytes_per_sec": 0, 00:04:35.377 "w_mbytes_per_sec": 0 00:04:35.377 }, 00:04:35.377 "claimed": true, 00:04:35.377 "claim_type": "exclusive_write", 00:04:35.377 "zoned": false, 00:04:35.377 "supported_io_types": { 00:04:35.377 "read": true, 00:04:35.377 "write": true, 00:04:35.377 "unmap": true, 
00:04:35.377 "flush": true, 00:04:35.377 "reset": true, 00:04:35.377 "nvme_admin": false, 00:04:35.377 "nvme_io": false, 00:04:35.377 "nvme_io_md": false, 00:04:35.377 "write_zeroes": true, 00:04:35.377 "zcopy": true, 00:04:35.377 "get_zone_info": false, 00:04:35.377 "zone_management": false, 00:04:35.377 "zone_append": false, 00:04:35.377 "compare": false, 00:04:35.377 "compare_and_write": false, 00:04:35.377 "abort": true, 00:04:35.377 "seek_hole": false, 00:04:35.377 "seek_data": false, 00:04:35.377 "copy": true, 00:04:35.377 "nvme_iov_md": false 00:04:35.377 }, 00:04:35.377 "memory_domains": [ 00:04:35.377 { 00:04:35.377 "dma_device_id": "system", 00:04:35.377 "dma_device_type": 1 00:04:35.377 }, 00:04:35.377 { 00:04:35.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.377 "dma_device_type": 2 00:04:35.377 } 00:04:35.377 ], 00:04:35.377 "driver_specific": {} 00:04:35.377 }, 00:04:35.377 { 00:04:35.377 "name": "Passthru0", 00:04:35.377 "aliases": [ 00:04:35.377 "5febb30c-7e0b-5d63-82a3-8b6e81b024ec" 00:04:35.377 ], 00:04:35.377 "product_name": "passthru", 00:04:35.377 "block_size": 512, 00:04:35.377 "num_blocks": 16384, 00:04:35.377 "uuid": "5febb30c-7e0b-5d63-82a3-8b6e81b024ec", 00:04:35.377 "assigned_rate_limits": { 00:04:35.377 "rw_ios_per_sec": 0, 00:04:35.377 "rw_mbytes_per_sec": 0, 00:04:35.377 "r_mbytes_per_sec": 0, 00:04:35.377 "w_mbytes_per_sec": 0 00:04:35.377 }, 00:04:35.377 "claimed": false, 00:04:35.377 "zoned": false, 00:04:35.377 "supported_io_types": { 00:04:35.377 "read": true, 00:04:35.377 "write": true, 00:04:35.377 "unmap": true, 00:04:35.377 "flush": true, 00:04:35.377 "reset": true, 00:04:35.377 "nvme_admin": false, 00:04:35.377 "nvme_io": false, 00:04:35.377 "nvme_io_md": false, 00:04:35.377 "write_zeroes": true, 00:04:35.377 "zcopy": true, 00:04:35.377 "get_zone_info": false, 00:04:35.377 "zone_management": false, 00:04:35.377 "zone_append": false, 00:04:35.377 "compare": false, 00:04:35.377 "compare_and_write": false, 00:04:35.377 "abort": true, 00:04:35.377 "seek_hole": false, 00:04:35.377 "seek_data": false, 00:04:35.377 "copy": true, 00:04:35.377 "nvme_iov_md": false 00:04:35.377 }, 00:04:35.377 "memory_domains": [ 00:04:35.377 { 00:04:35.377 "dma_device_id": "system", 00:04:35.377 "dma_device_type": 1 00:04:35.377 }, 00:04:35.377 { 00:04:35.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.377 "dma_device_type": 2 00:04:35.377 } 00:04:35.377 ], 00:04:35.377 "driver_specific": { 00:04:35.377 "passthru": { 00:04:35.377 "name": "Passthru0", 00:04:35.377 "base_bdev_name": "Malloc0" 00:04:35.377 } 00:04:35.377 } 00:04:35.377 } 00:04:35.377 ]' 00:04:35.377 13:12:43 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:35.377 13:12:43 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:35.377 13:12:43 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:35.377 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.377 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.377 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.377 13:12:43 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:35.377 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.377 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.377 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.377 13:12:43 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:35.377 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.377 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.377 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.377 13:12:43 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:35.377 13:12:43 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:35.377 13:12:43 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:35.377 00:04:35.377 real 0m0.272s 00:04:35.377 user 0m0.171s 00:04:35.377 sys 0m0.046s 00:04:35.377 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.377 13:12:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.377 ************************************ 00:04:35.377 END TEST rpc_integrity 00:04:35.377 ************************************ 00:04:35.377 13:12:43 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:35.377 13:12:43 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.377 13:12:43 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.377 13:12:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.377 ************************************ 00:04:35.377 START TEST rpc_plugins 00:04:35.377 ************************************ 00:04:35.377 13:12:43 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:35.377 13:12:43 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:35.377 13:12:43 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.377 13:12:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.377 13:12:43 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.377 13:12:43 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:35.377 13:12:43 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:35.377 13:12:43 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.377 13:12:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.635 13:12:43 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.635 13:12:43 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:35.635 { 00:04:35.635 "name": "Malloc1", 00:04:35.635 "aliases": [ 00:04:35.635 "3e2c16ba-4687-40f9-98a8-b81890393741" 00:04:35.635 ], 00:04:35.635 "product_name": "Malloc disk", 00:04:35.635 "block_size": 4096, 00:04:35.635 "num_blocks": 256, 00:04:35.635 "uuid": "3e2c16ba-4687-40f9-98a8-b81890393741", 00:04:35.635 "assigned_rate_limits": { 00:04:35.635 "rw_ios_per_sec": 0, 00:04:35.635 "rw_mbytes_per_sec": 0, 00:04:35.635 "r_mbytes_per_sec": 0, 00:04:35.635 "w_mbytes_per_sec": 0 00:04:35.635 }, 00:04:35.635 "claimed": false, 00:04:35.635 "zoned": false, 00:04:35.635 "supported_io_types": { 00:04:35.635 "read": true, 00:04:35.635 "write": true, 00:04:35.635 "unmap": true, 00:04:35.635 "flush": true, 00:04:35.635 "reset": true, 00:04:35.635 "nvme_admin": false, 00:04:35.635 "nvme_io": false, 00:04:35.635 "nvme_io_md": false, 00:04:35.635 "write_zeroes": true, 00:04:35.635 "zcopy": true, 00:04:35.635 "get_zone_info": false, 00:04:35.635 "zone_management": false, 00:04:35.635 "zone_append": false, 00:04:35.635 "compare": false, 00:04:35.635 "compare_and_write": false, 00:04:35.635 "abort": true, 00:04:35.635 "seek_hole": false, 00:04:35.635 "seek_data": false, 00:04:35.635 "copy": true, 00:04:35.635 
"nvme_iov_md": false 00:04:35.635 }, 00:04:35.635 "memory_domains": [ 00:04:35.635 { 00:04:35.635 "dma_device_id": "system", 00:04:35.635 "dma_device_type": 1 00:04:35.635 }, 00:04:35.635 { 00:04:35.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.635 "dma_device_type": 2 00:04:35.635 } 00:04:35.635 ], 00:04:35.635 "driver_specific": {} 00:04:35.635 } 00:04:35.635 ]' 00:04:35.635 13:12:43 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:35.635 13:12:43 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:35.635 13:12:43 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:35.635 13:12:43 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.635 13:12:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.635 13:12:43 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.635 13:12:43 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:35.635 13:12:43 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.635 13:12:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.636 13:12:43 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.636 13:12:43 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:35.636 13:12:43 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:35.636 13:12:43 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:35.636 00:04:35.636 real 0m0.139s 00:04:35.636 user 0m0.079s 00:04:35.636 sys 0m0.028s 00:04:35.636 13:12:43 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.636 13:12:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.636 ************************************ 00:04:35.636 END TEST rpc_plugins 00:04:35.636 ************************************ 00:04:35.636 13:12:43 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:35.636 13:12:43 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.636 13:12:43 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.636 13:12:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.636 ************************************ 00:04:35.636 START TEST rpc_trace_cmd_test 00:04:35.636 ************************************ 00:04:35.636 13:12:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:35.636 13:12:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:35.636 13:12:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:35.636 13:12:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.636 13:12:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:35.636 13:12:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.636 13:12:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:35.636 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3826594", 00:04:35.636 "tpoint_group_mask": "0x8", 00:04:35.636 "iscsi_conn": { 00:04:35.636 "mask": "0x2", 00:04:35.636 "tpoint_mask": "0x0" 00:04:35.636 }, 00:04:35.636 "scsi": { 00:04:35.636 "mask": "0x4", 00:04:35.636 "tpoint_mask": "0x0" 00:04:35.636 }, 00:04:35.636 "bdev": { 00:04:35.636 "mask": "0x8", 00:04:35.636 "tpoint_mask": "0xffffffffffffffff" 00:04:35.636 }, 00:04:35.636 "nvmf_rdma": { 00:04:35.636 "mask": "0x10", 00:04:35.636 "tpoint_mask": "0x0" 00:04:35.636 }, 00:04:35.636 "nvmf_tcp": { 00:04:35.636 "mask": "0x20", 
00:04:35.636 "tpoint_mask": "0x0" 00:04:35.636 }, 00:04:35.636 "ftl": { 00:04:35.636 "mask": "0x40", 00:04:35.636 "tpoint_mask": "0x0" 00:04:35.636 }, 00:04:35.636 "blobfs": { 00:04:35.636 "mask": "0x80", 00:04:35.636 "tpoint_mask": "0x0" 00:04:35.636 }, 00:04:35.636 "dsa": { 00:04:35.636 "mask": "0x200", 00:04:35.636 "tpoint_mask": "0x0" 00:04:35.636 }, 00:04:35.636 "thread": { 00:04:35.636 "mask": "0x400", 00:04:35.636 "tpoint_mask": "0x0" 00:04:35.636 }, 00:04:35.636 "nvme_pcie": { 00:04:35.636 "mask": "0x800", 00:04:35.636 "tpoint_mask": "0x0" 00:04:35.636 }, 00:04:35.636 "iaa": { 00:04:35.636 "mask": "0x1000", 00:04:35.636 "tpoint_mask": "0x0" 00:04:35.636 }, 00:04:35.636 "nvme_tcp": { 00:04:35.636 "mask": "0x2000", 00:04:35.636 "tpoint_mask": "0x0" 00:04:35.636 }, 00:04:35.636 "bdev_nvme": { 00:04:35.636 "mask": "0x4000", 00:04:35.636 "tpoint_mask": "0x0" 00:04:35.636 }, 00:04:35.636 "sock": { 00:04:35.636 "mask": "0x8000", 00:04:35.636 "tpoint_mask": "0x0" 00:04:35.636 }, 00:04:35.636 "blob": { 00:04:35.636 "mask": "0x10000", 00:04:35.636 "tpoint_mask": "0x0" 00:04:35.636 }, 00:04:35.636 "bdev_raid": { 00:04:35.636 "mask": "0x20000", 00:04:35.636 "tpoint_mask": "0x0" 00:04:35.636 }, 00:04:35.636 "scheduler": { 00:04:35.636 "mask": "0x40000", 00:04:35.636 "tpoint_mask": "0x0" 00:04:35.636 } 00:04:35.636 }' 00:04:35.636 13:12:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:35.636 13:12:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:35.636 13:12:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:35.895 13:12:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:35.895 13:12:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:35.895 13:12:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:35.895 13:12:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:35.895 13:12:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:35.895 13:12:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:35.895 13:12:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:35.895 00:04:35.895 real 0m0.225s 00:04:35.895 user 0m0.186s 00:04:35.895 sys 0m0.033s 00:04:35.895 13:12:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.895 13:12:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:35.895 ************************************ 00:04:35.895 END TEST rpc_trace_cmd_test 00:04:35.895 ************************************ 00:04:35.895 13:12:43 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:35.895 13:12:43 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:35.895 13:12:43 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:35.895 13:12:43 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.895 13:12:43 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.895 13:12:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.895 ************************************ 00:04:35.895 START TEST rpc_daemon_integrity 00:04:35.895 ************************************ 00:04:35.895 13:12:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:35.895 13:12:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:35.895 13:12:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.895 13:12:43 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.895 13:12:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.895 13:12:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:35.895 13:12:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:36.155 13:12:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:36.155 13:12:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:36.155 13:12:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.155 13:12:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.155 13:12:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.155 13:12:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:36.155 13:12:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:36.155 13:12:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.155 13:12:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.155 13:12:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.155 13:12:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:36.155 { 00:04:36.155 "name": "Malloc2", 00:04:36.155 "aliases": [ 00:04:36.155 "0ab3a14c-7083-4c86-9c59-1c704f6bf268" 00:04:36.155 ], 00:04:36.155 "product_name": "Malloc disk", 00:04:36.155 "block_size": 512, 00:04:36.155 "num_blocks": 16384, 00:04:36.155 "uuid": "0ab3a14c-7083-4c86-9c59-1c704f6bf268", 00:04:36.155 "assigned_rate_limits": { 00:04:36.155 "rw_ios_per_sec": 0, 00:04:36.155 "rw_mbytes_per_sec": 0, 00:04:36.155 "r_mbytes_per_sec": 0, 00:04:36.155 "w_mbytes_per_sec": 0 00:04:36.155 }, 00:04:36.155 "claimed": false, 00:04:36.155 "zoned": false, 00:04:36.155 "supported_io_types": { 00:04:36.155 "read": true, 00:04:36.155 "write": true, 00:04:36.155 "unmap": true, 00:04:36.155 "flush": true, 00:04:36.155 "reset": true, 00:04:36.155 "nvme_admin": false, 00:04:36.155 "nvme_io": false, 00:04:36.155 "nvme_io_md": false, 00:04:36.155 "write_zeroes": true, 00:04:36.155 "zcopy": true, 00:04:36.155 "get_zone_info": false, 00:04:36.155 "zone_management": false, 00:04:36.155 "zone_append": false, 00:04:36.155 "compare": false, 00:04:36.155 "compare_and_write": false, 00:04:36.155 "abort": true, 00:04:36.155 "seek_hole": false, 00:04:36.155 "seek_data": false, 00:04:36.155 "copy": true, 00:04:36.155 "nvme_iov_md": false 00:04:36.155 }, 00:04:36.155 "memory_domains": [ 00:04:36.155 { 00:04:36.155 "dma_device_id": "system", 00:04:36.155 "dma_device_type": 1 00:04:36.155 }, 00:04:36.155 { 00:04:36.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.155 "dma_device_type": 2 00:04:36.155 } 00:04:36.155 ], 00:04:36.155 "driver_specific": {} 00:04:36.155 } 00:04:36.155 ]' 00:04:36.155 13:12:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:36.155 13:12:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:36.155 13:12:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:36.155 13:12:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.155 13:12:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.155 [2024-10-17 13:12:44.059852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:36.155 
[2024-10-17 13:12:44.059884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:36.155 [2024-10-17 13:12:44.059901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x57c1330 00:04:36.155 [2024-10-17 13:12:44.059911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:36.155 [2024-10-17 13:12:44.060794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:36.155 [2024-10-17 13:12:44.060817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:36.155 Passthru0 00:04:36.155 13:12:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.155 13:12:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:36.155 13:12:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.155 13:12:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.155 13:12:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.155 13:12:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:36.155 { 00:04:36.155 "name": "Malloc2", 00:04:36.155 "aliases": [ 00:04:36.155 "0ab3a14c-7083-4c86-9c59-1c704f6bf268" 00:04:36.155 ], 00:04:36.155 "product_name": "Malloc disk", 00:04:36.155 "block_size": 512, 00:04:36.155 "num_blocks": 16384, 00:04:36.155 "uuid": "0ab3a14c-7083-4c86-9c59-1c704f6bf268", 00:04:36.155 "assigned_rate_limits": { 00:04:36.155 "rw_ios_per_sec": 0, 00:04:36.155 "rw_mbytes_per_sec": 0, 00:04:36.155 "r_mbytes_per_sec": 0, 00:04:36.155 "w_mbytes_per_sec": 0 00:04:36.155 }, 00:04:36.155 "claimed": true, 00:04:36.156 "claim_type": "exclusive_write", 00:04:36.156 "zoned": false, 00:04:36.156 "supported_io_types": { 00:04:36.156 "read": true, 00:04:36.156 "write": true, 00:04:36.156 "unmap": true, 00:04:36.156 "flush": true, 00:04:36.156 "reset": true, 00:04:36.156 "nvme_admin": false, 00:04:36.156 "nvme_io": false, 00:04:36.156 "nvme_io_md": false, 00:04:36.156 "write_zeroes": true, 00:04:36.156 "zcopy": true, 00:04:36.156 "get_zone_info": false, 00:04:36.156 "zone_management": false, 00:04:36.156 "zone_append": false, 00:04:36.156 "compare": false, 00:04:36.156 "compare_and_write": false, 00:04:36.156 "abort": true, 00:04:36.156 "seek_hole": false, 00:04:36.156 "seek_data": false, 00:04:36.156 "copy": true, 00:04:36.156 "nvme_iov_md": false 00:04:36.156 }, 00:04:36.156 "memory_domains": [ 00:04:36.156 { 00:04:36.156 "dma_device_id": "system", 00:04:36.156 "dma_device_type": 1 00:04:36.156 }, 00:04:36.156 { 00:04:36.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.156 "dma_device_type": 2 00:04:36.156 } 00:04:36.156 ], 00:04:36.156 "driver_specific": {} 00:04:36.156 }, 00:04:36.156 { 00:04:36.156 "name": "Passthru0", 00:04:36.156 "aliases": [ 00:04:36.156 "e9b91221-9fc6-5853-b38a-65ce8a712ce4" 00:04:36.156 ], 00:04:36.156 "product_name": "passthru", 00:04:36.156 "block_size": 512, 00:04:36.156 "num_blocks": 16384, 00:04:36.156 "uuid": "e9b91221-9fc6-5853-b38a-65ce8a712ce4", 00:04:36.156 "assigned_rate_limits": { 00:04:36.156 "rw_ios_per_sec": 0, 00:04:36.156 "rw_mbytes_per_sec": 0, 00:04:36.156 "r_mbytes_per_sec": 0, 00:04:36.156 "w_mbytes_per_sec": 0 00:04:36.156 }, 00:04:36.156 "claimed": false, 00:04:36.156 "zoned": false, 00:04:36.156 "supported_io_types": { 00:04:36.156 "read": true, 00:04:36.156 "write": true, 00:04:36.156 "unmap": true, 00:04:36.156 "flush": true, 00:04:36.156 "reset": true, 
00:04:36.156 "nvme_admin": false, 00:04:36.156 "nvme_io": false, 00:04:36.156 "nvme_io_md": false, 00:04:36.156 "write_zeroes": true, 00:04:36.156 "zcopy": true, 00:04:36.156 "get_zone_info": false, 00:04:36.156 "zone_management": false, 00:04:36.156 "zone_append": false, 00:04:36.156 "compare": false, 00:04:36.156 "compare_and_write": false, 00:04:36.156 "abort": true, 00:04:36.156 "seek_hole": false, 00:04:36.156 "seek_data": false, 00:04:36.156 "copy": true, 00:04:36.156 "nvme_iov_md": false 00:04:36.156 }, 00:04:36.156 "memory_domains": [ 00:04:36.156 { 00:04:36.156 "dma_device_id": "system", 00:04:36.156 "dma_device_type": 1 00:04:36.156 }, 00:04:36.156 { 00:04:36.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.156 "dma_device_type": 2 00:04:36.156 } 00:04:36.156 ], 00:04:36.156 "driver_specific": { 00:04:36.156 "passthru": { 00:04:36.156 "name": "Passthru0", 00:04:36.156 "base_bdev_name": "Malloc2" 00:04:36.156 } 00:04:36.156 } 00:04:36.156 } 00:04:36.156 ]' 00:04:36.156 13:12:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:36.156 13:12:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:36.156 13:12:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:36.156 13:12:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.156 13:12:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.156 13:12:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.156 13:12:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:36.156 13:12:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.156 13:12:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.156 13:12:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.156 13:12:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:36.156 13:12:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.156 13:12:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.156 13:12:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.156 13:12:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:36.156 13:12:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:36.416 13:12:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:36.416 00:04:36.416 real 0m0.285s 00:04:36.416 user 0m0.187s 00:04:36.416 sys 0m0.044s 00:04:36.416 13:12:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.416 13:12:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.416 ************************************ 00:04:36.416 END TEST rpc_daemon_integrity 00:04:36.416 ************************************ 00:04:36.416 13:12:44 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:36.416 13:12:44 rpc -- rpc/rpc.sh@84 -- # killprocess 3826594 00:04:36.416 13:12:44 rpc -- common/autotest_common.sh@950 -- # '[' -z 3826594 ']' 00:04:36.416 13:12:44 rpc -- common/autotest_common.sh@954 -- # kill -0 3826594 00:04:36.416 13:12:44 rpc -- common/autotest_common.sh@955 -- # uname 00:04:36.416 13:12:44 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:36.416 13:12:44 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3826594 
00:04:36.416 13:12:44 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:36.416 13:12:44 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:36.416 13:12:44 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3826594' 00:04:36.416 killing process with pid 3826594 00:04:36.416 13:12:44 rpc -- common/autotest_common.sh@969 -- # kill 3826594 00:04:36.416 13:12:44 rpc -- common/autotest_common.sh@974 -- # wait 3826594 00:04:36.675 00:04:36.675 real 0m2.128s 00:04:36.675 user 0m2.674s 00:04:36.675 sys 0m0.818s 00:04:36.675 13:12:44 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.675 13:12:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.675 ************************************ 00:04:36.675 END TEST rpc 00:04:36.675 ************************************ 00:04:36.676 13:12:44 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:36.676 13:12:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.676 13:12:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.676 13:12:44 -- common/autotest_common.sh@10 -- # set +x 00:04:36.676 ************************************ 00:04:36.676 START TEST skip_rpc 00:04:36.676 ************************************ 00:04:36.676 13:12:44 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:36.934 * Looking for test storage... 00:04:36.934 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:36.934 13:12:44 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:36.934 13:12:44 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:36.934 13:12:44 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:36.934 13:12:44 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:36.934 13:12:44 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.934 13:12:44 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.934 13:12:44 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.934 13:12:44 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.934 13:12:44 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.934 13:12:44 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.934 13:12:44 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.934 13:12:44 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.934 13:12:44 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.934 13:12:44 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.934 13:12:44 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.934 13:12:44 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:36.934 13:12:44 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:36.934 13:12:44 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.934 13:12:44 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.934 13:12:44 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:36.934 13:12:44 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:36.934 13:12:44 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.934 13:12:44 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:36.934 13:12:44 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.934 13:12:44 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:36.935 13:12:44 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:36.935 13:12:44 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.935 13:12:44 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:36.935 13:12:44 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.935 13:12:44 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.935 13:12:44 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.935 13:12:44 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:36.935 13:12:44 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.935 13:12:44 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:36.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.935 --rc genhtml_branch_coverage=1 00:04:36.935 --rc genhtml_function_coverage=1 00:04:36.935 --rc genhtml_legend=1 00:04:36.935 --rc geninfo_all_blocks=1 00:04:36.935 --rc geninfo_unexecuted_blocks=1 00:04:36.935 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:36.935 ' 00:04:36.935 13:12:44 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:36.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.935 --rc genhtml_branch_coverage=1 00:04:36.935 --rc genhtml_function_coverage=1 00:04:36.935 --rc genhtml_legend=1 00:04:36.935 --rc geninfo_all_blocks=1 00:04:36.935 --rc geninfo_unexecuted_blocks=1 00:04:36.935 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:36.935 ' 00:04:36.935 13:12:44 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:36.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.935 --rc genhtml_branch_coverage=1 00:04:36.935 --rc genhtml_function_coverage=1 00:04:36.935 --rc genhtml_legend=1 00:04:36.935 --rc geninfo_all_blocks=1 00:04:36.935 --rc geninfo_unexecuted_blocks=1 00:04:36.935 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:36.935 ' 00:04:36.935 13:12:44 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:36.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.935 --rc genhtml_branch_coverage=1 00:04:36.935 --rc genhtml_function_coverage=1 00:04:36.935 --rc genhtml_legend=1 00:04:36.935 --rc geninfo_all_blocks=1 00:04:36.935 --rc geninfo_unexecuted_blocks=1 00:04:36.935 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:36.935 ' 00:04:36.935 13:12:44 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:36.935 13:12:44 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:36.935 13:12:44 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:36.935 13:12:44 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.935 13:12:44 
skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.935 13:12:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.935 ************************************ 00:04:36.935 START TEST skip_rpc 00:04:36.935 ************************************ 00:04:36.935 13:12:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:36.935 13:12:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3827049 00:04:36.935 13:12:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.935 13:12:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:36.935 13:12:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:36.935 [2024-10-17 13:12:44.933045] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:04:36.935 [2024-10-17 13:12:44.933105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3827049 ] 00:04:37.194 [2024-10-17 13:12:44.997518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.194 [2024-10-17 13:12:45.036810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.465 13:12:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:42.465 13:12:49 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:42.465 13:12:49 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:42.465 13:12:49 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:42.465 13:12:49 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:42.465 13:12:49 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:42.465 13:12:49 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:42.465 13:12:49 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:42.465 13:12:49 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.465 13:12:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.465 13:12:49 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:42.465 13:12:49 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:42.465 13:12:49 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:42.465 13:12:49 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:42.465 13:12:49 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:42.465 13:12:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:42.465 13:12:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3827049 00:04:42.465 13:12:49 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 3827049 ']' 00:04:42.465 13:12:49 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 3827049 00:04:42.465 13:12:49 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:42.465 13:12:49 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:42.465 13:12:49 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3827049 
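(Note, not part of the captured output: the skip_rpc case above starts the target with --no-rpc-server and then expects an RPC call to fail, which is why the rpc_cmd spdk_get_version attempt returns a non-zero status. A minimal sketch of the same check outside the harness; paths are relative to an SPDK checkout and the error handling is illustrative only.)

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5                                    # the test also sleeps before probing (skip_rpc.sh)
  scripts/rpc.py spdk_get_version \
    && echo 'unexpected: RPC server answered' \
    || echo 'expected: no RPC server is listening'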
00:04:42.465 13:12:49 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:42.465 13:12:49 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:42.465 13:12:49 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3827049' 00:04:42.465 killing process with pid 3827049 00:04:42.465 13:12:49 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 3827049 00:04:42.465 13:12:49 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 3827049 00:04:42.465 00:04:42.465 real 0m5.368s 00:04:42.465 user 0m5.127s 00:04:42.465 sys 0m0.283s 00:04:42.465 13:12:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.465 13:12:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.465 ************************************ 00:04:42.465 END TEST skip_rpc 00:04:42.465 ************************************ 00:04:42.465 13:12:50 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:42.465 13:12:50 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.465 13:12:50 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.465 13:12:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.465 ************************************ 00:04:42.465 START TEST skip_rpc_with_json 00:04:42.465 ************************************ 00:04:42.465 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:42.465 13:12:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:42.465 13:12:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3828132 00:04:42.465 13:12:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.465 13:12:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.465 13:12:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3828132 00:04:42.465 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 3828132 ']' 00:04:42.465 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.465 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:42.465 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.465 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:42.465 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.465 [2024-10-17 13:12:50.388343] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
00:04:42.465 [2024-10-17 13:12:50.388424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3828132 ] 00:04:42.465 [2024-10-17 13:12:50.457202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.465 [2024-10-17 13:12:50.497066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.724 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:42.724 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:42.724 13:12:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:42.724 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.724 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.724 [2024-10-17 13:12:50.706284] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:42.724 request: 00:04:42.724 { 00:04:42.724 "trtype": "tcp", 00:04:42.724 "method": "nvmf_get_transports", 00:04:42.724 "req_id": 1 00:04:42.724 } 00:04:42.724 Got JSON-RPC error response 00:04:42.724 response: 00:04:42.724 { 00:04:42.724 "code": -19, 00:04:42.724 "message": "No such device" 00:04:42.724 } 00:04:42.724 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:42.724 13:12:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:42.724 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.724 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.724 [2024-10-17 13:12:50.714361] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:42.724 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.724 13:12:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:42.724 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.724 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.983 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.983 13:12:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:42.983 { 00:04:42.983 "subsystems": [ 00:04:42.983 { 00:04:42.983 "subsystem": "scheduler", 00:04:42.983 "config": [ 00:04:42.983 { 00:04:42.983 "method": "framework_set_scheduler", 00:04:42.983 "params": { 00:04:42.983 "name": "static" 00:04:42.983 } 00:04:42.983 } 00:04:42.983 ] 00:04:42.983 }, 00:04:42.983 { 00:04:42.983 "subsystem": "vmd", 00:04:42.983 "config": [] 00:04:42.983 }, 00:04:42.983 { 00:04:42.983 "subsystem": "sock", 00:04:42.983 "config": [ 00:04:42.983 { 00:04:42.983 "method": "sock_set_default_impl", 00:04:42.983 "params": { 00:04:42.983 "impl_name": "posix" 00:04:42.983 } 00:04:42.983 }, 00:04:42.983 { 00:04:42.983 "method": "sock_impl_set_options", 00:04:42.983 "params": { 00:04:42.983 "impl_name": "ssl", 00:04:42.983 "recv_buf_size": 4096, 00:04:42.983 "send_buf_size": 4096, 00:04:42.983 "enable_recv_pipe": true, 00:04:42.983 "enable_quickack": false, 00:04:42.983 
"enable_placement_id": 0, 00:04:42.983 "enable_zerocopy_send_server": true, 00:04:42.983 "enable_zerocopy_send_client": false, 00:04:42.983 "zerocopy_threshold": 0, 00:04:42.983 "tls_version": 0, 00:04:42.983 "enable_ktls": false 00:04:42.983 } 00:04:42.983 }, 00:04:42.983 { 00:04:42.983 "method": "sock_impl_set_options", 00:04:42.983 "params": { 00:04:42.983 "impl_name": "posix", 00:04:42.983 "recv_buf_size": 2097152, 00:04:42.983 "send_buf_size": 2097152, 00:04:42.983 "enable_recv_pipe": true, 00:04:42.983 "enable_quickack": false, 00:04:42.983 "enable_placement_id": 0, 00:04:42.983 "enable_zerocopy_send_server": true, 00:04:42.983 "enable_zerocopy_send_client": false, 00:04:42.983 "zerocopy_threshold": 0, 00:04:42.983 "tls_version": 0, 00:04:42.983 "enable_ktls": false 00:04:42.983 } 00:04:42.983 } 00:04:42.983 ] 00:04:42.983 }, 00:04:42.983 { 00:04:42.983 "subsystem": "iobuf", 00:04:42.983 "config": [ 00:04:42.983 { 00:04:42.983 "method": "iobuf_set_options", 00:04:42.983 "params": { 00:04:42.983 "small_pool_count": 8192, 00:04:42.983 "large_pool_count": 1024, 00:04:42.983 "small_bufsize": 8192, 00:04:42.983 "large_bufsize": 135168 00:04:42.983 } 00:04:42.983 } 00:04:42.983 ] 00:04:42.983 }, 00:04:42.983 { 00:04:42.983 "subsystem": "keyring", 00:04:42.983 "config": [] 00:04:42.983 }, 00:04:42.983 { 00:04:42.983 "subsystem": "vfio_user_target", 00:04:42.983 "config": null 00:04:42.983 }, 00:04:42.983 { 00:04:42.983 "subsystem": "fsdev", 00:04:42.983 "config": [ 00:04:42.983 { 00:04:42.983 "method": "fsdev_set_opts", 00:04:42.983 "params": { 00:04:42.983 "fsdev_io_pool_size": 65535, 00:04:42.983 "fsdev_io_cache_size": 256 00:04:42.983 } 00:04:42.983 } 00:04:42.983 ] 00:04:42.983 }, 00:04:42.983 { 00:04:42.983 "subsystem": "accel", 00:04:42.983 "config": [ 00:04:42.983 { 00:04:42.983 "method": "accel_set_options", 00:04:42.983 "params": { 00:04:42.983 "small_cache_size": 128, 00:04:42.983 "large_cache_size": 16, 00:04:42.983 "task_count": 2048, 00:04:42.983 "sequence_count": 2048, 00:04:42.983 "buf_count": 2048 00:04:42.983 } 00:04:42.983 } 00:04:42.983 ] 00:04:42.983 }, 00:04:42.983 { 00:04:42.983 "subsystem": "bdev", 00:04:42.983 "config": [ 00:04:42.983 { 00:04:42.983 "method": "bdev_set_options", 00:04:42.983 "params": { 00:04:42.983 "bdev_io_pool_size": 65535, 00:04:42.983 "bdev_io_cache_size": 256, 00:04:42.983 "bdev_auto_examine": true, 00:04:42.983 "iobuf_small_cache_size": 128, 00:04:42.983 "iobuf_large_cache_size": 16 00:04:42.983 } 00:04:42.983 }, 00:04:42.984 { 00:04:42.984 "method": "bdev_raid_set_options", 00:04:42.984 "params": { 00:04:42.984 "process_window_size_kb": 1024, 00:04:42.984 "process_max_bandwidth_mb_sec": 0 00:04:42.984 } 00:04:42.984 }, 00:04:42.984 { 00:04:42.984 "method": "bdev_nvme_set_options", 00:04:42.984 "params": { 00:04:42.984 "action_on_timeout": "none", 00:04:42.984 "timeout_us": 0, 00:04:42.984 "timeout_admin_us": 0, 00:04:42.984 "keep_alive_timeout_ms": 10000, 00:04:42.984 "arbitration_burst": 0, 00:04:42.984 "low_priority_weight": 0, 00:04:42.984 "medium_priority_weight": 0, 00:04:42.984 "high_priority_weight": 0, 00:04:42.984 "nvme_adminq_poll_period_us": 10000, 00:04:42.984 "nvme_ioq_poll_period_us": 0, 00:04:42.984 "io_queue_requests": 0, 00:04:42.984 "delay_cmd_submit": true, 00:04:42.984 "transport_retry_count": 4, 00:04:42.984 "bdev_retry_count": 3, 00:04:42.984 "transport_ack_timeout": 0, 00:04:42.984 "ctrlr_loss_timeout_sec": 0, 00:04:42.984 "reconnect_delay_sec": 0, 00:04:42.984 "fast_io_fail_timeout_sec": 0, 00:04:42.984 
"disable_auto_failback": false, 00:04:42.984 "generate_uuids": false, 00:04:42.984 "transport_tos": 0, 00:04:42.984 "nvme_error_stat": false, 00:04:42.984 "rdma_srq_size": 0, 00:04:42.984 "io_path_stat": false, 00:04:42.984 "allow_accel_sequence": false, 00:04:42.984 "rdma_max_cq_size": 0, 00:04:42.984 "rdma_cm_event_timeout_ms": 0, 00:04:42.984 "dhchap_digests": [ 00:04:42.984 "sha256", 00:04:42.984 "sha384", 00:04:42.984 "sha512" 00:04:42.984 ], 00:04:42.984 "dhchap_dhgroups": [ 00:04:42.984 "null", 00:04:42.984 "ffdhe2048", 00:04:42.984 "ffdhe3072", 00:04:42.984 "ffdhe4096", 00:04:42.984 "ffdhe6144", 00:04:42.984 "ffdhe8192" 00:04:42.984 ] 00:04:42.984 } 00:04:42.984 }, 00:04:42.984 { 00:04:42.984 "method": "bdev_nvme_set_hotplug", 00:04:42.984 "params": { 00:04:42.984 "period_us": 100000, 00:04:42.984 "enable": false 00:04:42.984 } 00:04:42.984 }, 00:04:42.984 { 00:04:42.984 "method": "bdev_iscsi_set_options", 00:04:42.984 "params": { 00:04:42.984 "timeout_sec": 30 00:04:42.984 } 00:04:42.984 }, 00:04:42.984 { 00:04:42.984 "method": "bdev_wait_for_examine" 00:04:42.984 } 00:04:42.984 ] 00:04:42.984 }, 00:04:42.984 { 00:04:42.984 "subsystem": "nvmf", 00:04:42.984 "config": [ 00:04:42.984 { 00:04:42.984 "method": "nvmf_set_config", 00:04:42.984 "params": { 00:04:42.984 "discovery_filter": "match_any", 00:04:42.984 "admin_cmd_passthru": { 00:04:42.984 "identify_ctrlr": false 00:04:42.984 }, 00:04:42.984 "dhchap_digests": [ 00:04:42.984 "sha256", 00:04:42.984 "sha384", 00:04:42.984 "sha512" 00:04:42.984 ], 00:04:42.984 "dhchap_dhgroups": [ 00:04:42.984 "null", 00:04:42.984 "ffdhe2048", 00:04:42.984 "ffdhe3072", 00:04:42.984 "ffdhe4096", 00:04:42.984 "ffdhe6144", 00:04:42.984 "ffdhe8192" 00:04:42.984 ] 00:04:42.984 } 00:04:42.984 }, 00:04:42.984 { 00:04:42.984 "method": "nvmf_set_max_subsystems", 00:04:42.984 "params": { 00:04:42.984 "max_subsystems": 1024 00:04:42.984 } 00:04:42.984 }, 00:04:42.984 { 00:04:42.984 "method": "nvmf_set_crdt", 00:04:42.984 "params": { 00:04:42.984 "crdt1": 0, 00:04:42.984 "crdt2": 0, 00:04:42.984 "crdt3": 0 00:04:42.984 } 00:04:42.984 }, 00:04:42.984 { 00:04:42.984 "method": "nvmf_create_transport", 00:04:42.984 "params": { 00:04:42.984 "trtype": "TCP", 00:04:42.984 "max_queue_depth": 128, 00:04:42.984 "max_io_qpairs_per_ctrlr": 127, 00:04:42.984 "in_capsule_data_size": 4096, 00:04:42.984 "max_io_size": 131072, 00:04:42.984 "io_unit_size": 131072, 00:04:42.984 "max_aq_depth": 128, 00:04:42.984 "num_shared_buffers": 511, 00:04:42.984 "buf_cache_size": 4294967295, 00:04:42.984 "dif_insert_or_strip": false, 00:04:42.984 "zcopy": false, 00:04:42.984 "c2h_success": true, 00:04:42.984 "sock_priority": 0, 00:04:42.984 "abort_timeout_sec": 1, 00:04:42.984 "ack_timeout": 0, 00:04:42.984 "data_wr_pool_size": 0 00:04:42.984 } 00:04:42.984 } 00:04:42.984 ] 00:04:42.984 }, 00:04:42.984 { 00:04:42.984 "subsystem": "nbd", 00:04:42.984 "config": [] 00:04:42.984 }, 00:04:42.984 { 00:04:42.984 "subsystem": "ublk", 00:04:42.984 "config": [] 00:04:42.984 }, 00:04:42.984 { 00:04:42.984 "subsystem": "vhost_blk", 00:04:42.984 "config": [] 00:04:42.984 }, 00:04:42.984 { 00:04:42.984 "subsystem": "scsi", 00:04:42.984 "config": null 00:04:42.984 }, 00:04:42.984 { 00:04:42.984 "subsystem": "iscsi", 00:04:42.984 "config": [ 00:04:42.984 { 00:04:42.984 "method": "iscsi_set_options", 00:04:42.984 "params": { 00:04:42.984 "node_base": "iqn.2016-06.io.spdk", 00:04:42.984 "max_sessions": 128, 00:04:42.984 "max_connections_per_session": 2, 00:04:42.984 "max_queue_depth": 64, 00:04:42.984 
"default_time2wait": 2, 00:04:42.984 "default_time2retain": 20, 00:04:42.984 "first_burst_length": 8192, 00:04:42.984 "immediate_data": true, 00:04:42.984 "allow_duplicated_isid": false, 00:04:42.984 "error_recovery_level": 0, 00:04:42.984 "nop_timeout": 60, 00:04:42.984 "nop_in_interval": 30, 00:04:42.984 "disable_chap": false, 00:04:42.984 "require_chap": false, 00:04:42.984 "mutual_chap": false, 00:04:42.984 "chap_group": 0, 00:04:42.984 "max_large_datain_per_connection": 64, 00:04:42.984 "max_r2t_per_connection": 4, 00:04:42.984 "pdu_pool_size": 36864, 00:04:42.984 "immediate_data_pool_size": 16384, 00:04:42.984 "data_out_pool_size": 2048 00:04:42.984 } 00:04:42.984 } 00:04:42.984 ] 00:04:42.984 }, 00:04:42.984 { 00:04:42.984 "subsystem": "vhost_scsi", 00:04:42.984 "config": [] 00:04:42.984 } 00:04:42.984 ] 00:04:42.984 } 00:04:42.984 13:12:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:42.984 13:12:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3828132 00:04:42.984 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3828132 ']' 00:04:42.984 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3828132 00:04:42.984 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:42.984 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:42.984 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3828132 00:04:42.984 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:42.984 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:42.984 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3828132' 00:04:42.984 killing process with pid 3828132 00:04:42.984 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3828132 00:04:42.984 13:12:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3828132 00:04:43.243 13:12:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3828169 00:04:43.243 13:12:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:43.243 13:12:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:48.516 13:12:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3828169 00:04:48.516 13:12:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3828169 ']' 00:04:48.516 13:12:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3828169 00:04:48.516 13:12:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:48.516 13:12:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:48.516 13:12:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3828169 00:04:48.516 13:12:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:48.516 13:12:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:48.516 13:12:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- 
# echo 'killing process with pid 3828169' 00:04:48.516 killing process with pid 3828169 00:04:48.516 13:12:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3828169 00:04:48.516 13:12:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3828169 00:04:48.775 13:12:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:48.775 13:12:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:48.775 00:04:48.775 real 0m6.226s 00:04:48.775 user 0m5.907s 00:04:48.775 sys 0m0.615s 00:04:48.775 13:12:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:48.775 13:12:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.775 ************************************ 00:04:48.775 END TEST skip_rpc_with_json 00:04:48.775 ************************************ 00:04:48.775 13:12:56 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:48.775 13:12:56 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:48.775 13:12:56 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:48.775 13:12:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.775 ************************************ 00:04:48.775 START TEST skip_rpc_with_delay 00:04:48.775 ************************************ 00:04:48.775 13:12:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:48.775 13:12:56 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:48.775 13:12:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:48.775 13:12:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:48.775 13:12:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:48.775 13:12:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.775 13:12:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:48.775 13:12:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.775 13:12:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:48.775 13:12:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.775 13:12:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:48.775 13:12:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:48.775 13:12:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 
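The skip_rpc_with_delay trace above resolves spdk_tgt through the valid_exec_arg/type -P checks and then runs it under the NOT wrapper with --no-rpc-server combined with --wait-for-rpc, a flag pair the target is expected to reject; the resulting ERROR line follows immediately below. A minimal bash sketch of that expected-failure pattern, with the wrapper body assumed rather than quoted from autotest_common.sh:

    # NOT <cmd...>: invert the exit status, so an expected failure makes the test pass
    NOT() {
        if "$@"; then
            return 1    # the wrapped command unexpectedly succeeded
        fi
        return 0        # non-zero exit is the outcome the test wants
    }

    NOT ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc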
00:04:48.775 [2024-10-17 13:12:56.695287] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:48.775 13:12:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:48.775 13:12:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:48.775 13:12:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:48.775 13:12:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:48.775 00:04:48.775 real 0m0.044s 00:04:48.775 user 0m0.019s 00:04:48.775 sys 0m0.024s 00:04:48.775 13:12:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:48.775 13:12:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:48.775 ************************************ 00:04:48.775 END TEST skip_rpc_with_delay 00:04:48.775 ************************************ 00:04:48.775 13:12:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:48.775 13:12:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:48.775 13:12:56 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:48.775 13:12:56 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:48.775 13:12:56 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:48.775 13:12:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.775 ************************************ 00:04:48.775 START TEST exit_on_failed_rpc_init 00:04:48.775 ************************************ 00:04:48.775 13:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:48.775 13:12:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3829267 00:04:48.775 13:12:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3829267 00:04:48.775 13:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 3829267 ']' 00:04:48.775 13:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.775 13:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:48.775 13:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.775 13:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:48.775 13:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:48.775 13:12:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:48.775 [2024-10-17 13:12:56.819467] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
00:04:48.775 [2024-10-17 13:12:56.819546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3829267 ] 00:04:49.033 [2024-10-17 13:12:56.886473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.034 [2024-10-17 13:12:56.928810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.292 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:49.292 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:49.292 13:12:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.292 13:12:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:49.292 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:49.292 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:49.292 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.292 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.292 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.292 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.293 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.293 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.293 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.293 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:49.293 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:49.293 [2024-10-17 13:12:57.156006] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:04:49.293 [2024-10-17 13:12:57.156075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3829276 ] 00:04:49.293 [2024-10-17 13:12:57.221706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.293 [2024-10-17 13:12:57.261976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.293 [2024-10-17 13:12:57.262046] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
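The ERROR lines around this point record the exit_on_failed_rpc_init scenario: a first spdk_tgt (core mask 0x1) already listens on /var/tmp/spdk.sock, so the second instance started with -m 0x2 cannot bind its RPC listener and stops with a non-zero status, which the test then maps back down to es=1. A minimal sketch of that setup, reusing the helper names visible in the trace and treating their bodies as assumptions:

    ./build/bin/spdk_tgt -m 0x1 &          # first instance owns /var/tmp/spdk.sock
    spdk_pid=$!
    waitforlisten "$spdk_pid"              # block until the RPC socket is up
    NOT ./build/bin/spdk_tgt -m 0x2        # second instance must fail: socket already in use
    killprocess "$spdk_pid"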
00:04:49.293 [2024-10-17 13:12:57.262060] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:49.293 [2024-10-17 13:12:57.262068] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:49.293 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:49.293 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:49.293 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:49.293 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:49.293 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:49.293 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:49.293 13:12:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:49.293 13:12:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3829267 00:04:49.293 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 3829267 ']' 00:04:49.293 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 3829267 00:04:49.293 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:49.293 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:49.293 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3829267 00:04:49.552 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:49.552 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:49.552 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3829267' 00:04:49.552 killing process with pid 3829267 00:04:49.552 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 3829267 00:04:49.552 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 3829267 00:04:49.811 00:04:49.811 real 0m0.860s 00:04:49.811 user 0m0.887s 00:04:49.811 sys 0m0.381s 00:04:49.811 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.811 13:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:49.811 ************************************ 00:04:49.811 END TEST exit_on_failed_rpc_init 00:04:49.811 ************************************ 00:04:49.811 13:12:57 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:49.811 00:04:49.811 real 0m13.011s 00:04:49.811 user 0m12.165s 00:04:49.811 sys 0m1.622s 00:04:49.811 13:12:57 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.811 13:12:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.811 ************************************ 00:04:49.811 END TEST skip_rpc 00:04:49.811 ************************************ 00:04:49.811 13:12:57 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:49.811 13:12:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.811 13:12:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.811 13:12:57 
-- common/autotest_common.sh@10 -- # set +x 00:04:49.811 ************************************ 00:04:49.811 START TEST rpc_client 00:04:49.811 ************************************ 00:04:49.811 13:12:57 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:50.071 * Looking for test storage... 00:04:50.071 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:04:50.071 13:12:57 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:50.071 13:12:57 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:50.071 13:12:57 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:50.071 13:12:57 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.071 13:12:57 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:50.071 13:12:57 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.071 13:12:57 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:50.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.071 --rc genhtml_branch_coverage=1 00:04:50.071 --rc genhtml_function_coverage=1 00:04:50.071 --rc genhtml_legend=1 00:04:50.071 --rc geninfo_all_blocks=1 00:04:50.071 --rc geninfo_unexecuted_blocks=1 00:04:50.071 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:50.071 ' 00:04:50.071 13:12:57 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:50.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.071 --rc genhtml_branch_coverage=1 00:04:50.071 --rc genhtml_function_coverage=1 00:04:50.071 --rc genhtml_legend=1 00:04:50.071 --rc geninfo_all_blocks=1 00:04:50.071 --rc geninfo_unexecuted_blocks=1 00:04:50.071 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:50.071 ' 00:04:50.071 13:12:57 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:50.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.071 --rc genhtml_branch_coverage=1 00:04:50.071 --rc genhtml_function_coverage=1 00:04:50.071 --rc genhtml_legend=1 00:04:50.071 --rc geninfo_all_blocks=1 00:04:50.071 --rc geninfo_unexecuted_blocks=1 00:04:50.071 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:50.071 ' 00:04:50.071 13:12:57 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:50.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.071 --rc genhtml_branch_coverage=1 00:04:50.071 --rc genhtml_function_coverage=1 00:04:50.071 --rc genhtml_legend=1 00:04:50.071 --rc geninfo_all_blocks=1 00:04:50.071 --rc geninfo_unexecuted_blocks=1 00:04:50.071 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:50.071 ' 00:04:50.071 13:12:57 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:50.071 OK 00:04:50.071 13:12:57 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:50.071 00:04:50.071 real 0m0.210s 00:04:50.071 user 0m0.115s 00:04:50.071 sys 0m0.112s 00:04:50.071 13:12:57 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:04:50.071 13:12:57 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:50.071 ************************************ 00:04:50.071 END TEST rpc_client 00:04:50.071 ************************************ 00:04:50.071 13:12:58 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:50.071 13:12:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.071 13:12:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.071 13:12:58 -- common/autotest_common.sh@10 -- # set +x 00:04:50.071 ************************************ 00:04:50.071 START TEST json_config 00:04:50.071 ************************************ 00:04:50.071 13:12:58 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:50.332 13:12:58 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:50.332 13:12:58 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:50.332 13:12:58 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:50.332 13:12:58 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:50.332 13:12:58 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.332 13:12:58 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.332 13:12:58 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.332 13:12:58 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.332 13:12:58 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.332 13:12:58 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.332 13:12:58 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.332 13:12:58 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.332 13:12:58 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.332 13:12:58 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.332 13:12:58 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.332 13:12:58 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:50.332 13:12:58 json_config -- scripts/common.sh@345 -- # : 1 00:04:50.332 13:12:58 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.332 13:12:58 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.332 13:12:58 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:50.332 13:12:58 json_config -- scripts/common.sh@353 -- # local d=1 00:04:50.332 13:12:58 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.332 13:12:58 json_config -- scripts/common.sh@355 -- # echo 1 00:04:50.332 13:12:58 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.332 13:12:58 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:50.332 13:12:58 json_config -- scripts/common.sh@353 -- # local d=2 00:04:50.332 13:12:58 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.332 13:12:58 json_config -- scripts/common.sh@355 -- # echo 2 00:04:50.332 13:12:58 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.332 13:12:58 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.332 13:12:58 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.332 13:12:58 json_config -- scripts/common.sh@368 -- # return 0 00:04:50.332 13:12:58 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.332 13:12:58 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:50.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.332 --rc genhtml_branch_coverage=1 00:04:50.332 --rc genhtml_function_coverage=1 00:04:50.332 --rc genhtml_legend=1 00:04:50.332 --rc geninfo_all_blocks=1 00:04:50.332 --rc geninfo_unexecuted_blocks=1 00:04:50.332 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:50.332 ' 00:04:50.332 13:12:58 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:50.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.332 --rc genhtml_branch_coverage=1 00:04:50.332 --rc genhtml_function_coverage=1 00:04:50.332 --rc genhtml_legend=1 00:04:50.332 --rc geninfo_all_blocks=1 00:04:50.332 --rc geninfo_unexecuted_blocks=1 00:04:50.332 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:50.332 ' 00:04:50.332 13:12:58 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:50.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.332 --rc genhtml_branch_coverage=1 00:04:50.332 --rc genhtml_function_coverage=1 00:04:50.332 --rc genhtml_legend=1 00:04:50.332 --rc geninfo_all_blocks=1 00:04:50.332 --rc geninfo_unexecuted_blocks=1 00:04:50.332 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:50.332 ' 00:04:50.332 13:12:58 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:50.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.332 --rc genhtml_branch_coverage=1 00:04:50.332 --rc genhtml_function_coverage=1 00:04:50.332 --rc genhtml_legend=1 00:04:50.332 --rc geninfo_all_blocks=1 00:04:50.332 --rc geninfo_unexecuted_blocks=1 00:04:50.332 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:50.332 ' 00:04:50.332 13:12:58 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:50.332 13:12:58 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:50.332 13:12:58 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.332 13:12:58 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.332 13:12:58 json_config -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:04:50.332 13:12:58 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.332 13:12:58 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.332 13:12:58 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.332 13:12:58 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.332 13:12:58 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.332 13:12:58 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.332 13:12:58 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.332 13:12:58 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:04:50.332 13:12:58 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:04:50.332 13:12:58 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.332 13:12:58 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.332 13:12:58 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:50.332 13:12:58 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:50.332 13:12:58 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:50.332 13:12:58 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:50.332 13:12:58 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.332 13:12:58 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.332 13:12:58 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.332 13:12:58 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.332 13:12:58 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.332 13:12:58 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.332 13:12:58 json_config -- paths/export.sh@5 -- # export PATH 00:04:50.333 13:12:58 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.333 13:12:58 json_config -- nvmf/common.sh@51 -- # : 0 00:04:50.333 13:12:58 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:50.333 13:12:58 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:50.333 13:12:58 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:50.333 13:12:58 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.333 13:12:58 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.333 13:12:58 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:50.333 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:50.333 13:12:58 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:50.333 13:12:58 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:50.333 13:12:58 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:50.333 13:12:58 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:50.333 13:12:58 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:50.333 13:12:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:50.333 13:12:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:50.333 13:12:58 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:50.333 13:12:58 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:50.333 WARNING: No tests are enabled so not running JSON configuration tests 00:04:50.333 13:12:58 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:50.333 00:04:50.333 real 0m0.161s 00:04:50.333 user 0m0.088s 00:04:50.333 sys 0m0.080s 00:04:50.333 13:12:58 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.333 13:12:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.333 ************************************ 00:04:50.333 END TEST json_config 00:04:50.333 ************************************ 00:04:50.333 13:12:58 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:50.333 13:12:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.333 13:12:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.333 13:12:58 -- common/autotest_common.sh@10 -- # set +x 00:04:50.333 ************************************ 00:04:50.333 START TEST json_config_extra_key 00:04:50.333 ************************************ 00:04:50.333 13:12:58 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:50.333 13:12:58 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:50.333 13:12:58 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov 
--version 00:04:50.333 13:12:58 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:50.593 13:12:58 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.593 13:12:58 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:50.593 13:12:58 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.593 13:12:58 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:50.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.593 --rc genhtml_branch_coverage=1 00:04:50.593 --rc genhtml_function_coverage=1 00:04:50.593 --rc genhtml_legend=1 00:04:50.593 --rc geninfo_all_blocks=1 00:04:50.593 --rc geninfo_unexecuted_blocks=1 00:04:50.593 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:50.593 ' 00:04:50.593 13:12:58 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:50.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.593 --rc genhtml_branch_coverage=1 
00:04:50.593 --rc genhtml_function_coverage=1 00:04:50.593 --rc genhtml_legend=1 00:04:50.593 --rc geninfo_all_blocks=1 00:04:50.593 --rc geninfo_unexecuted_blocks=1 00:04:50.593 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:50.593 ' 00:04:50.593 13:12:58 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:50.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.593 --rc genhtml_branch_coverage=1 00:04:50.593 --rc genhtml_function_coverage=1 00:04:50.593 --rc genhtml_legend=1 00:04:50.594 --rc geninfo_all_blocks=1 00:04:50.594 --rc geninfo_unexecuted_blocks=1 00:04:50.594 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:50.594 ' 00:04:50.594 13:12:58 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:50.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.594 --rc genhtml_branch_coverage=1 00:04:50.594 --rc genhtml_function_coverage=1 00:04:50.594 --rc genhtml_legend=1 00:04:50.594 --rc geninfo_all_blocks=1 00:04:50.594 --rc geninfo_unexecuted_blocks=1 00:04:50.594 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:50.594 ' 00:04:50.594 13:12:58 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:50.594 13:12:58 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:50.594 13:12:58 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.594 13:12:58 json_config_extra_key -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.594 13:12:58 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.594 13:12:58 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.594 13:12:58 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.594 13:12:58 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.594 13:12:58 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:50.594 13:12:58 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:50.594 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:50.594 13:12:58 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:50.594 13:12:58 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:50.594 13:12:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:50.594 13:12:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # 
declare -A app_pid 00:04:50.594 13:12:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:50.594 13:12:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:50.594 13:12:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:50.594 13:12:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:50.594 13:12:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:50.594 13:12:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:50.594 13:12:58 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:50.594 13:12:58 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:50.594 INFO: launching applications... 00:04:50.594 13:12:58 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:50.594 13:12:58 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:50.594 13:12:58 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:50.594 13:12:58 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:50.594 13:12:58 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:50.594 13:12:58 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:50.594 13:12:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.594 13:12:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.594 13:12:58 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3829707 00:04:50.594 13:12:58 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:50.594 13:12:58 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:50.594 Waiting for target to run... 00:04:50.594 13:12:58 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3829707 /var/tmp/spdk_tgt.sock 00:04:50.594 13:12:58 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 3829707 ']' 00:04:50.594 13:12:58 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:50.594 13:12:58 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:50.594 13:12:58 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:50.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
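The json_config_extra_key trace above tracks the target through associative arrays keyed by app name: app_params holds the core mask and memory size, app_socket the RPC socket, configs_path the extra_key.json to load, and app_pid the PID recorded once the target is launched and waitforlisten confirms the socket. A condensed sketch of that bookkeeping, with paths abbreviated:

    declare -A app_pid app_socket app_params configs_path
    app_socket[target]=/var/tmp/spdk_tgt.sock
    app_params[target]='-m 0x1 -s 1024'
    configs_path[target]=./test/json_config/extra_key.json

    ./build/bin/spdk_tgt ${app_params[target]} -r "${app_socket[target]}" \
        --json "${configs_path[target]}" &
    app_pid[target]=$!
    waitforlisten "${app_pid[target]}" "${app_socket[target]}"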
00:04:50.594 13:12:58 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:50.594 13:12:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:50.594 [2024-10-17 13:12:58.495296] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:04:50.594 [2024-10-17 13:12:58.495361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3829707 ] 00:04:50.853 [2024-10-17 13:12:58.764639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.853 [2024-10-17 13:12:58.794717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.422 13:12:59 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:51.422 13:12:59 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:51.422 13:12:59 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:51.422 00:04:51.422 13:12:59 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:51.422 INFO: shutting down applications... 00:04:51.422 13:12:59 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:51.422 13:12:59 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:51.422 13:12:59 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:51.422 13:12:59 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3829707 ]] 00:04:51.422 13:12:59 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3829707 00:04:51.422 13:12:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:51.422 13:12:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.422 13:12:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3829707 00:04:51.422 13:12:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.992 13:12:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.992 13:12:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.992 13:12:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3829707 00:04:51.992 13:12:59 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:51.992 13:12:59 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:51.992 13:12:59 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:51.992 13:12:59 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:51.992 SPDK target shutdown done 00:04:51.992 13:12:59 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:51.992 Success 00:04:51.992 00:04:51.992 real 0m1.545s 00:04:51.992 user 0m1.293s 00:04:51.992 sys 0m0.409s 00:04:51.992 13:12:59 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.992 13:12:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:51.992 ************************************ 00:04:51.992 END TEST json_config_extra_key 00:04:51.992 ************************************ 00:04:51.992 13:12:59 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 
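The shutdown half of json_config_extra_key, traced just above before alias_rpc starts, sends SIGINT to the recorded PID and then polls it for up to 30 half-second intervals before announcing 'SPDK target shutdown done'. A minimal sketch of that polling loop, following the counters visible in the json_config/common.sh trace:

    kill -SIGINT "${app_pid[target]}"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "${app_pid[target]}" 2>/dev/null; then
            app_pid[target]=''             # process is gone: clean shutdown
            break
        fi
        sleep 0.5
    done
    echo 'SPDK target shutdown done'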
00:04:51.992 13:12:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:51.992 13:12:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.992 13:12:59 -- common/autotest_common.sh@10 -- # set +x 00:04:51.992 ************************************ 00:04:51.992 START TEST alias_rpc 00:04:51.992 ************************************ 00:04:51.992 13:12:59 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:51.992 * Looking for test storage... 00:04:52.251 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:04:52.251 13:13:00 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:52.251 13:13:00 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:52.251 13:13:00 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:52.252 13:13:00 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.252 13:13:00 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:52.252 13:13:00 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.252 13:13:00 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:52.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.252 --rc genhtml_branch_coverage=1 00:04:52.252 --rc genhtml_function_coverage=1 00:04:52.252 --rc genhtml_legend=1 00:04:52.252 --rc geninfo_all_blocks=1 00:04:52.252 --rc geninfo_unexecuted_blocks=1 00:04:52.252 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:52.252 ' 00:04:52.252 13:13:00 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:52.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.252 --rc genhtml_branch_coverage=1 00:04:52.252 --rc genhtml_function_coverage=1 00:04:52.252 --rc genhtml_legend=1 00:04:52.252 --rc geninfo_all_blocks=1 00:04:52.252 --rc geninfo_unexecuted_blocks=1 00:04:52.252 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:52.252 ' 00:04:52.252 13:13:00 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:52.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.252 --rc genhtml_branch_coverage=1 00:04:52.252 --rc genhtml_function_coverage=1 00:04:52.252 --rc genhtml_legend=1 00:04:52.252 --rc geninfo_all_blocks=1 00:04:52.252 --rc geninfo_unexecuted_blocks=1 00:04:52.252 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:52.252 ' 00:04:52.252 13:13:00 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:52.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.252 --rc genhtml_branch_coverage=1 00:04:52.252 --rc genhtml_function_coverage=1 00:04:52.252 --rc genhtml_legend=1 00:04:52.252 --rc geninfo_all_blocks=1 00:04:52.252 --rc geninfo_unexecuted_blocks=1 00:04:52.252 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:52.252 ' 00:04:52.252 13:13:00 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:52.252 13:13:00 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3830035 00:04:52.252 13:13:00 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.252 13:13:00 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3830035 00:04:52.252 13:13:00 alias_rpc -- 
common/autotest_common.sh@831 -- # '[' -z 3830035 ']' 00:04:52.252 13:13:00 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.252 13:13:00 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:52.252 13:13:00 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.252 13:13:00 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:52.252 13:13:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.252 [2024-10-17 13:13:00.161486] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:04:52.252 [2024-10-17 13:13:00.161573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3830035 ] 00:04:52.252 [2024-10-17 13:13:00.227907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.252 [2024-10-17 13:13:00.272020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.511 13:13:00 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:52.512 13:13:00 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:52.512 13:13:00 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:52.771 13:13:00 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3830035 00:04:52.771 13:13:00 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 3830035 ']' 00:04:52.771 13:13:00 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 3830035 00:04:52.771 13:13:00 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:52.771 13:13:00 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:52.771 13:13:00 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3830035 00:04:52.771 13:13:00 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:52.771 13:13:00 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:52.771 13:13:00 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3830035' 00:04:52.771 killing process with pid 3830035 00:04:52.771 13:13:00 alias_rpc -- common/autotest_common.sh@969 -- # kill 3830035 00:04:52.771 13:13:00 alias_rpc -- common/autotest_common.sh@974 -- # wait 3830035 00:04:53.030 00:04:53.030 real 0m1.121s 00:04:53.030 user 0m1.141s 00:04:53.030 sys 0m0.446s 00:04:53.030 13:13:01 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.030 13:13:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.030 ************************************ 00:04:53.030 END TEST alias_rpc 00:04:53.030 ************************************ 00:04:53.290 13:13:01 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:53.290 13:13:01 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:53.290 13:13:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.290 13:13:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.290 13:13:01 -- common/autotest_common.sh@10 -- # set +x 00:04:53.290 ************************************ 00:04:53.290 START TEST 
spdkcli_tcp 00:04:53.290 ************************************ 00:04:53.290 13:13:01 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:53.290 * Looking for test storage... 00:04:53.290 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:04:53.291 13:13:01 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:53.291 13:13:01 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:53.291 13:13:01 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:53.291 13:13:01 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.291 13:13:01 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:53.291 13:13:01 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.291 13:13:01 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:53.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.291 --rc genhtml_branch_coverage=1 00:04:53.291 --rc genhtml_function_coverage=1 00:04:53.291 --rc genhtml_legend=1 00:04:53.291 --rc geninfo_all_blocks=1 00:04:53.291 --rc geninfo_unexecuted_blocks=1 00:04:53.291 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:53.291 ' 00:04:53.291 13:13:01 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:53.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.291 --rc genhtml_branch_coverage=1 00:04:53.291 --rc genhtml_function_coverage=1 00:04:53.291 --rc genhtml_legend=1 00:04:53.291 --rc geninfo_all_blocks=1 00:04:53.291 --rc geninfo_unexecuted_blocks=1 00:04:53.291 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:53.291 ' 00:04:53.291 13:13:01 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:53.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.291 --rc genhtml_branch_coverage=1 00:04:53.291 --rc genhtml_function_coverage=1 00:04:53.291 --rc genhtml_legend=1 00:04:53.291 --rc geninfo_all_blocks=1 00:04:53.291 --rc geninfo_unexecuted_blocks=1 00:04:53.291 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:53.291 ' 00:04:53.291 13:13:01 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:53.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.291 --rc genhtml_branch_coverage=1 00:04:53.291 --rc genhtml_function_coverage=1 00:04:53.291 --rc genhtml_legend=1 00:04:53.291 --rc geninfo_all_blocks=1 00:04:53.291 --rc geninfo_unexecuted_blocks=1 00:04:53.291 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:53.291 ' 00:04:53.291 13:13:01 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:04:53.291 13:13:01 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:53.291 13:13:01 spdkcli_tcp -- spdkcli/common.sh@7 -- # 
spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:04:53.291 13:13:01 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:53.291 13:13:01 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:53.291 13:13:01 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:53.291 13:13:01 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:53.291 13:13:01 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:53.291 13:13:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.291 13:13:01 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3830358 00:04:53.291 13:13:01 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:53.291 13:13:01 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3830358 00:04:53.291 13:13:01 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 3830358 ']' 00:04:53.291 13:13:01 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.291 13:13:01 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:53.291 13:13:01 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.291 13:13:01 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:53.291 13:13:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.550 [2024-10-17 13:13:01.353161] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:04:53.550 [2024-10-17 13:13:01.353227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3830358 ] 00:04:53.550 [2024-10-17 13:13:01.419359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.550 [2024-10-17 13:13:01.463779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.550 [2024-10-17 13:13:01.463782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.808 13:13:01 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:53.808 13:13:01 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:53.808 13:13:01 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3830362 00:04:53.809 13:13:01 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:53.809 13:13:01 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:53.809 [ 00:04:53.809 "spdk_get_version", 00:04:53.809 "rpc_get_methods", 00:04:53.809 "notify_get_notifications", 00:04:53.809 "notify_get_types", 00:04:53.809 "trace_get_info", 00:04:53.809 "trace_get_tpoint_group_mask", 00:04:53.809 "trace_disable_tpoint_group", 00:04:53.809 "trace_enable_tpoint_group", 00:04:53.809 "trace_clear_tpoint_mask", 00:04:53.809 "trace_set_tpoint_mask", 00:04:53.809 "fsdev_set_opts", 00:04:53.809 "fsdev_get_opts", 00:04:53.809 "framework_get_pci_devices", 00:04:53.809 "framework_get_config", 00:04:53.809 "framework_get_subsystems", 00:04:53.809 "vfu_tgt_set_base_path", 00:04:53.809 
"keyring_get_keys", 00:04:53.809 "iobuf_get_stats", 00:04:53.809 "iobuf_set_options", 00:04:53.809 "sock_get_default_impl", 00:04:53.809 "sock_set_default_impl", 00:04:53.809 "sock_impl_set_options", 00:04:53.809 "sock_impl_get_options", 00:04:53.809 "vmd_rescan", 00:04:53.809 "vmd_remove_device", 00:04:53.809 "vmd_enable", 00:04:53.809 "accel_get_stats", 00:04:53.809 "accel_set_options", 00:04:53.809 "accel_set_driver", 00:04:53.809 "accel_crypto_key_destroy", 00:04:53.809 "accel_crypto_keys_get", 00:04:53.809 "accel_crypto_key_create", 00:04:53.809 "accel_assign_opc", 00:04:53.809 "accel_get_module_info", 00:04:53.809 "accel_get_opc_assignments", 00:04:53.809 "bdev_get_histogram", 00:04:53.809 "bdev_enable_histogram", 00:04:53.809 "bdev_set_qos_limit", 00:04:53.809 "bdev_set_qd_sampling_period", 00:04:53.809 "bdev_get_bdevs", 00:04:53.809 "bdev_reset_iostat", 00:04:53.809 "bdev_get_iostat", 00:04:53.809 "bdev_examine", 00:04:53.809 "bdev_wait_for_examine", 00:04:53.809 "bdev_set_options", 00:04:53.809 "scsi_get_devices", 00:04:53.809 "thread_set_cpumask", 00:04:53.809 "scheduler_set_options", 00:04:53.809 "framework_get_governor", 00:04:53.809 "framework_get_scheduler", 00:04:53.809 "framework_set_scheduler", 00:04:53.809 "framework_get_reactors", 00:04:53.809 "thread_get_io_channels", 00:04:53.809 "thread_get_pollers", 00:04:53.809 "thread_get_stats", 00:04:53.809 "framework_monitor_context_switch", 00:04:53.809 "spdk_kill_instance", 00:04:53.809 "log_enable_timestamps", 00:04:53.809 "log_get_flags", 00:04:53.809 "log_clear_flag", 00:04:53.809 "log_set_flag", 00:04:53.809 "log_get_level", 00:04:53.809 "log_set_level", 00:04:53.809 "log_get_print_level", 00:04:53.809 "log_set_print_level", 00:04:53.809 "framework_enable_cpumask_locks", 00:04:53.809 "framework_disable_cpumask_locks", 00:04:53.809 "framework_wait_init", 00:04:53.809 "framework_start_init", 00:04:53.809 "virtio_blk_create_transport", 00:04:53.809 "virtio_blk_get_transports", 00:04:53.809 "vhost_controller_set_coalescing", 00:04:53.809 "vhost_get_controllers", 00:04:53.809 "vhost_delete_controller", 00:04:53.809 "vhost_create_blk_controller", 00:04:53.809 "vhost_scsi_controller_remove_target", 00:04:53.809 "vhost_scsi_controller_add_target", 00:04:53.809 "vhost_start_scsi_controller", 00:04:53.809 "vhost_create_scsi_controller", 00:04:53.809 "ublk_recover_disk", 00:04:53.809 "ublk_get_disks", 00:04:53.809 "ublk_stop_disk", 00:04:53.809 "ublk_start_disk", 00:04:53.809 "ublk_destroy_target", 00:04:53.809 "ublk_create_target", 00:04:53.809 "nbd_get_disks", 00:04:53.809 "nbd_stop_disk", 00:04:53.809 "nbd_start_disk", 00:04:53.809 "env_dpdk_get_mem_stats", 00:04:53.809 "nvmf_stop_mdns_prr", 00:04:53.809 "nvmf_publish_mdns_prr", 00:04:53.809 "nvmf_subsystem_get_listeners", 00:04:53.809 "nvmf_subsystem_get_qpairs", 00:04:53.809 "nvmf_subsystem_get_controllers", 00:04:53.809 "nvmf_get_stats", 00:04:53.809 "nvmf_get_transports", 00:04:53.809 "nvmf_create_transport", 00:04:53.809 "nvmf_get_targets", 00:04:53.809 "nvmf_delete_target", 00:04:53.809 "nvmf_create_target", 00:04:53.809 "nvmf_subsystem_allow_any_host", 00:04:53.809 "nvmf_subsystem_set_keys", 00:04:53.809 "nvmf_subsystem_remove_host", 00:04:53.809 "nvmf_subsystem_add_host", 00:04:53.809 "nvmf_ns_remove_host", 00:04:53.809 "nvmf_ns_add_host", 00:04:53.809 "nvmf_subsystem_remove_ns", 00:04:53.809 "nvmf_subsystem_set_ns_ana_group", 00:04:53.809 "nvmf_subsystem_add_ns", 00:04:53.809 "nvmf_subsystem_listener_set_ana_state", 00:04:53.809 "nvmf_discovery_get_referrals", 
00:04:53.809 "nvmf_discovery_remove_referral", 00:04:53.809 "nvmf_discovery_add_referral", 00:04:53.809 "nvmf_subsystem_remove_listener", 00:04:53.809 "nvmf_subsystem_add_listener", 00:04:53.809 "nvmf_delete_subsystem", 00:04:53.809 "nvmf_create_subsystem", 00:04:53.809 "nvmf_get_subsystems", 00:04:53.809 "nvmf_set_crdt", 00:04:53.809 "nvmf_set_config", 00:04:53.809 "nvmf_set_max_subsystems", 00:04:53.809 "iscsi_get_histogram", 00:04:53.809 "iscsi_enable_histogram", 00:04:53.809 "iscsi_set_options", 00:04:53.809 "iscsi_get_auth_groups", 00:04:53.809 "iscsi_auth_group_remove_secret", 00:04:53.809 "iscsi_auth_group_add_secret", 00:04:53.809 "iscsi_delete_auth_group", 00:04:53.809 "iscsi_create_auth_group", 00:04:53.809 "iscsi_set_discovery_auth", 00:04:53.809 "iscsi_get_options", 00:04:53.809 "iscsi_target_node_request_logout", 00:04:53.809 "iscsi_target_node_set_redirect", 00:04:53.809 "iscsi_target_node_set_auth", 00:04:53.809 "iscsi_target_node_add_lun", 00:04:53.809 "iscsi_get_stats", 00:04:53.809 "iscsi_get_connections", 00:04:53.809 "iscsi_portal_group_set_auth", 00:04:53.809 "iscsi_start_portal_group", 00:04:53.809 "iscsi_delete_portal_group", 00:04:53.809 "iscsi_create_portal_group", 00:04:53.809 "iscsi_get_portal_groups", 00:04:53.809 "iscsi_delete_target_node", 00:04:53.809 "iscsi_target_node_remove_pg_ig_maps", 00:04:53.809 "iscsi_target_node_add_pg_ig_maps", 00:04:53.809 "iscsi_create_target_node", 00:04:53.809 "iscsi_get_target_nodes", 00:04:53.809 "iscsi_delete_initiator_group", 00:04:53.809 "iscsi_initiator_group_remove_initiators", 00:04:53.809 "iscsi_initiator_group_add_initiators", 00:04:53.809 "iscsi_create_initiator_group", 00:04:53.809 "iscsi_get_initiator_groups", 00:04:53.809 "fsdev_aio_delete", 00:04:53.809 "fsdev_aio_create", 00:04:53.809 "keyring_linux_set_options", 00:04:53.809 "keyring_file_remove_key", 00:04:53.809 "keyring_file_add_key", 00:04:53.809 "vfu_virtio_create_fs_endpoint", 00:04:53.809 "vfu_virtio_create_scsi_endpoint", 00:04:53.809 "vfu_virtio_scsi_remove_target", 00:04:53.809 "vfu_virtio_scsi_add_target", 00:04:53.809 "vfu_virtio_create_blk_endpoint", 00:04:53.809 "vfu_virtio_delete_endpoint", 00:04:53.809 "iaa_scan_accel_module", 00:04:53.809 "dsa_scan_accel_module", 00:04:53.809 "ioat_scan_accel_module", 00:04:53.809 "accel_error_inject_error", 00:04:53.809 "bdev_iscsi_delete", 00:04:53.809 "bdev_iscsi_create", 00:04:53.809 "bdev_iscsi_set_options", 00:04:53.809 "bdev_virtio_attach_controller", 00:04:53.809 "bdev_virtio_scsi_get_devices", 00:04:53.809 "bdev_virtio_detach_controller", 00:04:53.809 "bdev_virtio_blk_set_hotplug", 00:04:53.809 "bdev_ftl_set_property", 00:04:53.809 "bdev_ftl_get_properties", 00:04:53.809 "bdev_ftl_get_stats", 00:04:53.809 "bdev_ftl_unmap", 00:04:53.809 "bdev_ftl_unload", 00:04:53.809 "bdev_ftl_delete", 00:04:53.809 "bdev_ftl_load", 00:04:53.809 "bdev_ftl_create", 00:04:53.809 "bdev_aio_delete", 00:04:53.809 "bdev_aio_rescan", 00:04:53.809 "bdev_aio_create", 00:04:53.809 "blobfs_create", 00:04:53.809 "blobfs_detect", 00:04:53.809 "blobfs_set_cache_size", 00:04:53.809 "bdev_zone_block_delete", 00:04:53.809 "bdev_zone_block_create", 00:04:53.809 "bdev_delay_delete", 00:04:53.809 "bdev_delay_create", 00:04:53.809 "bdev_delay_update_latency", 00:04:53.809 "bdev_split_delete", 00:04:53.809 "bdev_split_create", 00:04:53.809 "bdev_error_inject_error", 00:04:53.809 "bdev_error_delete", 00:04:53.809 "bdev_error_create", 00:04:53.809 "bdev_raid_set_options", 00:04:53.809 "bdev_raid_remove_base_bdev", 00:04:53.809 
"bdev_raid_add_base_bdev", 00:04:53.809 "bdev_raid_delete", 00:04:53.809 "bdev_raid_create", 00:04:53.809 "bdev_raid_get_bdevs", 00:04:53.809 "bdev_lvol_set_parent_bdev", 00:04:53.809 "bdev_lvol_set_parent", 00:04:53.809 "bdev_lvol_check_shallow_copy", 00:04:53.809 "bdev_lvol_start_shallow_copy", 00:04:53.809 "bdev_lvol_grow_lvstore", 00:04:53.809 "bdev_lvol_get_lvols", 00:04:53.809 "bdev_lvol_get_lvstores", 00:04:53.809 "bdev_lvol_delete", 00:04:53.809 "bdev_lvol_set_read_only", 00:04:53.809 "bdev_lvol_resize", 00:04:53.809 "bdev_lvol_decouple_parent", 00:04:53.809 "bdev_lvol_inflate", 00:04:53.809 "bdev_lvol_rename", 00:04:53.809 "bdev_lvol_clone_bdev", 00:04:53.809 "bdev_lvol_clone", 00:04:53.809 "bdev_lvol_snapshot", 00:04:53.809 "bdev_lvol_create", 00:04:53.809 "bdev_lvol_delete_lvstore", 00:04:53.809 "bdev_lvol_rename_lvstore", 00:04:53.809 "bdev_lvol_create_lvstore", 00:04:53.809 "bdev_passthru_delete", 00:04:53.809 "bdev_passthru_create", 00:04:53.809 "bdev_nvme_cuse_unregister", 00:04:53.809 "bdev_nvme_cuse_register", 00:04:53.809 "bdev_opal_new_user", 00:04:53.809 "bdev_opal_set_lock_state", 00:04:53.809 "bdev_opal_delete", 00:04:53.809 "bdev_opal_get_info", 00:04:53.809 "bdev_opal_create", 00:04:53.809 "bdev_nvme_opal_revert", 00:04:53.809 "bdev_nvme_opal_init", 00:04:53.809 "bdev_nvme_send_cmd", 00:04:53.809 "bdev_nvme_set_keys", 00:04:53.809 "bdev_nvme_get_path_iostat", 00:04:53.809 "bdev_nvme_get_mdns_discovery_info", 00:04:53.809 "bdev_nvme_stop_mdns_discovery", 00:04:53.809 "bdev_nvme_start_mdns_discovery", 00:04:53.809 "bdev_nvme_set_multipath_policy", 00:04:53.809 "bdev_nvme_set_preferred_path", 00:04:53.809 "bdev_nvme_get_io_paths", 00:04:53.809 "bdev_nvme_remove_error_injection", 00:04:53.809 "bdev_nvme_add_error_injection", 00:04:53.809 "bdev_nvme_get_discovery_info", 00:04:53.809 "bdev_nvme_stop_discovery", 00:04:53.809 "bdev_nvme_start_discovery", 00:04:53.809 "bdev_nvme_get_controller_health_info", 00:04:53.809 "bdev_nvme_disable_controller", 00:04:53.809 "bdev_nvme_enable_controller", 00:04:53.809 "bdev_nvme_reset_controller", 00:04:53.809 "bdev_nvme_get_transport_statistics", 00:04:53.809 "bdev_nvme_apply_firmware", 00:04:53.810 "bdev_nvme_detach_controller", 00:04:53.810 "bdev_nvme_get_controllers", 00:04:53.810 "bdev_nvme_attach_controller", 00:04:53.810 "bdev_nvme_set_hotplug", 00:04:53.810 "bdev_nvme_set_options", 00:04:53.810 "bdev_null_resize", 00:04:53.810 "bdev_null_delete", 00:04:53.810 "bdev_null_create", 00:04:53.810 "bdev_malloc_delete", 00:04:53.810 "bdev_malloc_create" 00:04:53.810 ] 00:04:54.069 13:13:01 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:54.069 13:13:01 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:54.069 13:13:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.069 13:13:01 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:54.069 13:13:01 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3830358 00:04:54.069 13:13:01 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 3830358 ']' 00:04:54.069 13:13:01 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 3830358 00:04:54.069 13:13:01 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:54.069 13:13:01 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:54.069 13:13:01 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3830358 00:04:54.069 13:13:01 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:54.069 
13:13:01 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:54.069 13:13:01 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3830358' 00:04:54.069 killing process with pid 3830358 00:04:54.069 13:13:01 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 3830358 00:04:54.069 13:13:01 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 3830358 00:04:54.328 00:04:54.328 real 0m1.137s 00:04:54.328 user 0m1.881s 00:04:54.328 sys 0m0.506s 00:04:54.328 13:13:02 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.328 13:13:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.328 ************************************ 00:04:54.328 END TEST spdkcli_tcp 00:04:54.328 ************************************ 00:04:54.328 13:13:02 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:54.328 13:13:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.328 13:13:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.328 13:13:02 -- common/autotest_common.sh@10 -- # set +x 00:04:54.328 ************************************ 00:04:54.328 START TEST dpdk_mem_utility 00:04:54.328 ************************************ 00:04:54.328 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:54.588 * Looking for test storage... 00:04:54.588 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:04:54.588 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:54.588 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:54.588 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:54.588 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.588 13:13:02 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:54.588 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.588 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:54.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.588 --rc genhtml_branch_coverage=1 00:04:54.588 --rc genhtml_function_coverage=1 00:04:54.588 --rc genhtml_legend=1 00:04:54.588 --rc geninfo_all_blocks=1 00:04:54.588 --rc geninfo_unexecuted_blocks=1 00:04:54.588 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:54.588 ' 00:04:54.588 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:54.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.588 --rc genhtml_branch_coverage=1 00:04:54.588 --rc genhtml_function_coverage=1 00:04:54.588 --rc genhtml_legend=1 00:04:54.588 --rc geninfo_all_blocks=1 00:04:54.588 --rc geninfo_unexecuted_blocks=1 00:04:54.588 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:54.588 ' 00:04:54.588 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:54.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.588 --rc genhtml_branch_coverage=1 00:04:54.588 --rc genhtml_function_coverage=1 00:04:54.588 --rc genhtml_legend=1 00:04:54.588 --rc geninfo_all_blocks=1 00:04:54.588 --rc geninfo_unexecuted_blocks=1 00:04:54.588 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:54.588 ' 00:04:54.588 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:54.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.588 --rc genhtml_branch_coverage=1 00:04:54.588 --rc genhtml_function_coverage=1 00:04:54.588 --rc genhtml_legend=1 00:04:54.588 --rc geninfo_all_blocks=1 00:04:54.588 --rc geninfo_unexecuted_blocks=1 00:04:54.588 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:54.588 ' 00:04:54.588 13:13:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:54.588 13:13:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3830694 00:04:54.588 13:13:02 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3830694 00:04:54.588 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 3830694 ']' 00:04:54.588 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.588 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:54.588 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.588 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:54.588 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:54.588 13:13:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.588 [2024-10-17 13:13:02.523291] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:04:54.588 [2024-10-17 13:13:02.523381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3830694 ] 00:04:54.588 [2024-10-17 13:13:02.589787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.588 [2024-10-17 13:13:02.629105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.847 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:54.847 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:54.847 13:13:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:54.847 13:13:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:54.847 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.847 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:54.847 { 00:04:54.847 "filename": "/tmp/spdk_mem_dump.txt" 00:04:54.847 } 00:04:54.847 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.847 13:13:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:54.847 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:54.847 1 heaps totaling size 810.000000 MiB 00:04:54.847 size: 810.000000 MiB heap id: 0 00:04:54.847 end heaps---------- 00:04:54.847 9 mempools totaling size 595.772034 MiB 00:04:54.847 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:54.847 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:54.847 size: 92.545471 MiB name: bdev_io_3830694 00:04:54.847 size: 50.003479 MiB name: msgpool_3830694 00:04:54.847 size: 36.509338 MiB name: fsdev_io_3830694 00:04:54.847 size: 21.763794 MiB name: PDU_Pool 00:04:54.847 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:54.847 size: 4.133484 MiB name: evtpool_3830694 00:04:54.847 size: 0.026123 MiB name: Session_Pool 00:04:54.847 end mempools------- 00:04:54.847 6 memzones totaling size 4.142822 MiB 00:04:54.847 size: 1.000366 MiB name: RG_ring_0_3830694 00:04:54.847 size: 1.000366 MiB name: RG_ring_1_3830694 00:04:54.847 size: 1.000366 MiB name: RG_ring_4_3830694 
00:04:54.847 size: 1.000366 MiB name: RG_ring_5_3830694 00:04:54.847 size: 0.125366 MiB name: RG_ring_2_3830694 00:04:54.847 size: 0.015991 MiB name: RG_ring_3_3830694 00:04:54.847 end memzones------- 00:04:54.847 13:13:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:55.107 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:55.107 list of free elements. size: 10.862488 MiB 00:04:55.107 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:55.107 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:55.108 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:55.108 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:55.108 element at address: 0x200008000000 with size: 0.959839 MiB 00:04:55.108 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:55.108 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:55.108 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:55.108 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:55.108 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:55.108 element at address: 0x200003e00000 with size: 0.490723 MiB 00:04:55.108 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:55.108 element at address: 0x200010600000 with size: 0.481934 MiB 00:04:55.108 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:55.108 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:55.108 list of standard malloc elements. size: 199.218628 MiB 00:04:55.108 element at address: 0x2000081fff80 with size: 132.000122 MiB 00:04:55.108 element at address: 0x200003ffff80 with size: 64.000122 MiB 00:04:55.108 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:55.108 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:55.108 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:55.108 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:55.108 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:55.108 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:55.108 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:55.108 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:55.108 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:55.108 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:55.108 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:55.108 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:55.108 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:55.108 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:55.108 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:55.108 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:55.108 element at address: 0x20000085b100 with size: 0.000183 MiB 00:04:55.108 element at address: 0x2000008db3c0 with size: 0.000183 MiB 00:04:55.108 element at address: 0x2000008db5c0 with size: 0.000183 MiB 00:04:55.108 element at address: 0x2000008df880 with size: 0.000183 MiB 00:04:55.108 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:55.108 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:55.108 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:55.108 element at address: 0x200000cff0c0 with size: 
0.000183 MiB 00:04:55.108 element at address: 0x200003e7da00 with size: 0.000183 MiB 00:04:55.108 element at address: 0x200003e7dac0 with size: 0.000183 MiB 00:04:55.108 element at address: 0x200003efdd80 with size: 0.000183 MiB 00:04:55.108 element at address: 0x2000080fdd80 with size: 0.000183 MiB 00:04:55.108 element at address: 0x20001067b600 with size: 0.000183 MiB 00:04:55.108 element at address: 0x20001067b6c0 with size: 0.000183 MiB 00:04:55.108 element at address: 0x2000106fb980 with size: 0.000183 MiB 00:04:55.108 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:55.108 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:55.108 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:55.108 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:55.108 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:55.108 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:55.108 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:55.108 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:55.108 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:55.108 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:55.108 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:55.108 list of memzone associated elements. size: 599.918884 MiB 00:04:55.108 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:55.108 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:55.108 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:55.108 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:55.108 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:55.108 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3830694_0 00:04:55.108 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:55.108 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3830694_0 00:04:55.108 element at address: 0x2000107fdb80 with size: 36.008911 MiB 00:04:55.108 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3830694_0 00:04:55.108 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:55.108 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:55.108 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:55.108 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:55.108 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:55.108 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3830694_0 00:04:55.108 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:55.108 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3830694 00:04:55.108 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:55.108 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3830694 00:04:55.108 element at address: 0x2000106fba40 with size: 1.008118 MiB 00:04:55.108 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:55.108 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:55.108 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:55.108 element at address: 0x2000080fde40 with size: 1.008118 MiB 00:04:55.108 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:55.108 element at address: 0x200003efde40 with size: 1.008118 MiB 00:04:55.108 associated memzone info: size: 1.007996 MiB 
name: MP_SCSI_TASK_Pool 00:04:55.108 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:55.108 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3830694 00:04:55.108 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:55.108 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3830694 00:04:55.108 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:55.108 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3830694 00:04:55.108 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:04:55.108 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3830694 00:04:55.108 element at address: 0x20000085b1c0 with size: 0.500488 MiB 00:04:55.108 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3830694 00:04:55.108 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:55.108 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3830694 00:04:55.108 element at address: 0x20001067b780 with size: 0.500488 MiB 00:04:55.108 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:55.108 element at address: 0x200003e7db80 with size: 0.500488 MiB 00:04:55.108 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:55.108 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:55.108 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:55.108 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:55.108 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3830694 00:04:55.108 element at address: 0x2000008df940 with size: 0.125488 MiB 00:04:55.108 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3830694 00:04:55.108 element at address: 0x2000080f5b80 with size: 0.031738 MiB 00:04:55.108 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:55.108 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:55.108 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:55.108 element at address: 0x2000008db680 with size: 0.016113 MiB 00:04:55.108 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3830694 00:04:55.108 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:55.108 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:55.108 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:55.108 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3830694 00:04:55.108 element at address: 0x2000008db480 with size: 0.000305 MiB 00:04:55.108 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3830694 00:04:55.108 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:55.108 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3830694 00:04:55.108 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:55.108 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:55.108 13:13:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:55.108 13:13:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3830694 00:04:55.108 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 3830694 ']' 00:04:55.108 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 3830694 00:04:55.108 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:55.108 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux 
= Linux ']' 00:04:55.108 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3830694 00:04:55.108 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:55.108 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:55.108 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3830694' 00:04:55.108 killing process with pid 3830694 00:04:55.108 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 3830694 00:04:55.108 13:13:02 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 3830694 00:04:55.368 00:04:55.368 real 0m0.937s 00:04:55.368 user 0m0.852s 00:04:55.368 sys 0m0.416s 00:04:55.368 13:13:03 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.368 13:13:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:55.368 ************************************ 00:04:55.368 END TEST dpdk_mem_utility 00:04:55.368 ************************************ 00:04:55.368 13:13:03 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:04:55.368 13:13:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:55.368 13:13:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.368 13:13:03 -- common/autotest_common.sh@10 -- # set +x 00:04:55.368 ************************************ 00:04:55.368 START TEST event 00:04:55.368 ************************************ 00:04:55.368 13:13:03 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:04:55.628 * Looking for test storage... 00:04:55.628 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:04:55.628 13:13:03 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:55.628 13:13:03 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:55.628 13:13:03 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:55.628 13:13:03 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:55.628 13:13:03 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.628 13:13:03 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.628 13:13:03 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.628 13:13:03 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.628 13:13:03 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.628 13:13:03 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.628 13:13:03 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.628 13:13:03 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.628 13:13:03 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.628 13:13:03 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.628 13:13:03 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.628 13:13:03 event -- scripts/common.sh@344 -- # case "$op" in 00:04:55.628 13:13:03 event -- scripts/common.sh@345 -- # : 1 00:04:55.628 13:13:03 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.628 13:13:03 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.628 13:13:03 event -- scripts/common.sh@365 -- # decimal 1 00:04:55.628 13:13:03 event -- scripts/common.sh@353 -- # local d=1 00:04:55.628 13:13:03 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.628 13:13:03 event -- scripts/common.sh@355 -- # echo 1 00:04:55.628 13:13:03 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.628 13:13:03 event -- scripts/common.sh@366 -- # decimal 2 00:04:55.628 13:13:03 event -- scripts/common.sh@353 -- # local d=2 00:04:55.628 13:13:03 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.628 13:13:03 event -- scripts/common.sh@355 -- # echo 2 00:04:55.628 13:13:03 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.628 13:13:03 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.628 13:13:03 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.628 13:13:03 event -- scripts/common.sh@368 -- # return 0 00:04:55.628 13:13:03 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.628 13:13:03 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:55.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.628 --rc genhtml_branch_coverage=1 00:04:55.628 --rc genhtml_function_coverage=1 00:04:55.628 --rc genhtml_legend=1 00:04:55.628 --rc geninfo_all_blocks=1 00:04:55.628 --rc geninfo_unexecuted_blocks=1 00:04:55.628 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:55.628 ' 00:04:55.628 13:13:03 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:55.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.628 --rc genhtml_branch_coverage=1 00:04:55.628 --rc genhtml_function_coverage=1 00:04:55.628 --rc genhtml_legend=1 00:04:55.628 --rc geninfo_all_blocks=1 00:04:55.628 --rc geninfo_unexecuted_blocks=1 00:04:55.628 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:55.628 ' 00:04:55.628 13:13:03 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:55.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.628 --rc genhtml_branch_coverage=1 00:04:55.628 --rc genhtml_function_coverage=1 00:04:55.628 --rc genhtml_legend=1 00:04:55.628 --rc geninfo_all_blocks=1 00:04:55.628 --rc geninfo_unexecuted_blocks=1 00:04:55.628 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:55.628 ' 00:04:55.628 13:13:03 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:55.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.628 --rc genhtml_branch_coverage=1 00:04:55.629 --rc genhtml_function_coverage=1 00:04:55.629 --rc genhtml_legend=1 00:04:55.629 --rc geninfo_all_blocks=1 00:04:55.629 --rc geninfo_unexecuted_blocks=1 00:04:55.629 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:55.629 ' 00:04:55.629 13:13:03 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:55.629 13:13:03 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:55.629 13:13:03 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:55.629 13:13:03 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:55.629 13:13:03 event -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:04:55.629 13:13:03 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.629 ************************************ 00:04:55.629 START TEST event_perf 00:04:55.629 ************************************ 00:04:55.629 13:13:03 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:55.629 Running I/O for 1 seconds...[2024-10-17 13:13:03.589199] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:04:55.629 [2024-10-17 13:13:03.589282] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3830804 ] 00:04:55.629 [2024-10-17 13:13:03.660547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:55.888 [2024-10-17 13:13:03.705784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.888 [2024-10-17 13:13:03.705881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:55.888 [2024-10-17 13:13:03.705955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:55.888 [2024-10-17 13:13:03.705957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.827 Running I/O for 1 seconds... 00:04:56.827 lcore 0: 190598 00:04:56.827 lcore 1: 190599 00:04:56.827 lcore 2: 190601 00:04:56.827 lcore 3: 190599 00:04:56.827 done. 00:04:56.827 00:04:56.827 real 0m1.172s 00:04:56.827 user 0m4.081s 00:04:56.827 sys 0m0.088s 00:04:56.827 13:13:04 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.827 13:13:04 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:56.827 ************************************ 00:04:56.827 END TEST event_perf 00:04:56.827 ************************************ 00:04:56.827 13:13:04 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:56.827 13:13:04 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:56.827 13:13:04 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.827 13:13:04 event -- common/autotest_common.sh@10 -- # set +x 00:04:56.827 ************************************ 00:04:56.827 START TEST event_reactor 00:04:56.827 ************************************ 00:04:56.827 13:13:04 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:56.827 [2024-10-17 13:13:04.840972] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
00:04:56.827 [2024-10-17 13:13:04.841053] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3831060 ] 00:04:57.087 [2024-10-17 13:13:04.912373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.087 [2024-10-17 13:13:04.951315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.025 test_start 00:04:58.025 oneshot 00:04:58.025 tick 100 00:04:58.025 tick 100 00:04:58.025 tick 250 00:04:58.025 tick 100 00:04:58.025 tick 100 00:04:58.025 tick 100 00:04:58.025 tick 250 00:04:58.025 tick 500 00:04:58.025 tick 100 00:04:58.025 tick 100 00:04:58.025 tick 250 00:04:58.025 tick 100 00:04:58.025 tick 100 00:04:58.025 test_end 00:04:58.025 00:04:58.025 real 0m1.165s 00:04:58.025 user 0m1.081s 00:04:58.025 sys 0m0.079s 00:04:58.025 13:13:05 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.025 13:13:05 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:58.025 ************************************ 00:04:58.025 END TEST event_reactor 00:04:58.025 ************************************ 00:04:58.025 13:13:06 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:58.025 13:13:06 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:58.025 13:13:06 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.025 13:13:06 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.025 ************************************ 00:04:58.025 START TEST event_reactor_perf 00:04:58.025 ************************************ 00:04:58.025 13:13:06 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:58.284 [2024-10-17 13:13:06.085801] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
00:04:58.284 [2024-10-17 13:13:06.085882] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3831346 ] 00:04:58.284 [2024-10-17 13:13:06.158418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.284 [2024-10-17 13:13:06.197043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.222 test_start 00:04:59.222 test_end 00:04:59.222 Performance: 955377 events per second 00:04:59.222 00:04:59.222 real 0m1.168s 00:04:59.222 user 0m1.089s 00:04:59.222 sys 0m0.075s 00:04:59.222 13:13:07 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.222 13:13:07 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:59.222 ************************************ 00:04:59.222 END TEST event_reactor_perf 00:04:59.222 ************************************ 00:04:59.481 13:13:07 event -- event/event.sh@49 -- # uname -s 00:04:59.481 13:13:07 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:59.481 13:13:07 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:59.481 13:13:07 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.481 13:13:07 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.481 13:13:07 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.481 ************************************ 00:04:59.481 START TEST event_scheduler 00:04:59.481 ************************************ 00:04:59.481 13:13:07 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:59.481 * Looking for test storage... 
00:04:59.481 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:04:59.481 13:13:07 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:59.481 13:13:07 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:59.481 13:13:07 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:59.481 13:13:07 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.481 13:13:07 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:59.481 13:13:07 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.481 13:13:07 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:59.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.481 --rc genhtml_branch_coverage=1 00:04:59.481 --rc genhtml_function_coverage=1 00:04:59.481 --rc genhtml_legend=1 00:04:59.481 --rc geninfo_all_blocks=1 00:04:59.481 --rc geninfo_unexecuted_blocks=1 00:04:59.481 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:59.481 ' 00:04:59.481 13:13:07 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:59.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.481 --rc genhtml_branch_coverage=1 00:04:59.481 --rc genhtml_function_coverage=1 00:04:59.481 --rc genhtml_legend=1 00:04:59.481 --rc geninfo_all_blocks=1 00:04:59.481 --rc geninfo_unexecuted_blocks=1 00:04:59.481 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:59.481 ' 00:04:59.481 13:13:07 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:59.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.481 --rc genhtml_branch_coverage=1 00:04:59.481 --rc genhtml_function_coverage=1 00:04:59.481 --rc genhtml_legend=1 00:04:59.481 --rc geninfo_all_blocks=1 00:04:59.481 --rc geninfo_unexecuted_blocks=1 00:04:59.481 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:59.481 ' 00:04:59.481 13:13:07 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:59.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.481 --rc genhtml_branch_coverage=1 00:04:59.481 --rc genhtml_function_coverage=1 00:04:59.481 --rc genhtml_legend=1 00:04:59.481 --rc geninfo_all_blocks=1 00:04:59.481 --rc geninfo_unexecuted_blocks=1 00:04:59.481 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:59.481 ' 00:04:59.481 13:13:07 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:59.481 13:13:07 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3831662 00:04:59.481 13:13:07 event.event_scheduler -- 
scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.481 13:13:07 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3831662 00:04:59.481 13:13:07 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 3831662 ']' 00:04:59.481 13:13:07 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.481 13:13:07 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.481 13:13:07 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.482 13:13:07 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.482 13:13:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.482 13:13:07 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:59.482 [2024-10-17 13:13:07.529890] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:04:59.482 [2024-10-17 13:13:07.529979] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3831662 ] 00:04:59.741 [2024-10-17 13:13:07.594108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:59.741 [2024-10-17 13:13:07.640829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.741 [2024-10-17 13:13:07.640915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.741 [2024-10-17 13:13:07.641000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:59.741 [2024-10-17 13:13:07.641002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.741 13:13:07 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.741 13:13:07 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:59.741 13:13:07 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:59.741 13:13:07 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.741 13:13:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.741 [2024-10-17 13:13:07.701631] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:59.741 [2024-10-17 13:13:07.701652] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:59.741 [2024-10-17 13:13:07.701662] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:59.741 [2024-10-17 13:13:07.701670] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:59.741 [2024-10-17 13:13:07.701677] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:59.741 13:13:07 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.741 13:13:07 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:59.741 13:13:07 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 
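The scheduler bring-up traced above condenses to the short sequence below. It is a sketch assembled only from commands visible in the trace; rpc_cmd, waitforlisten and killprocess are the SPDK test helpers from common/autotest_common.sh that the trace itself calls, and $testdir stands for the .../spdk/test/event directory spelled out in full in the log.

    # start the scheduler test app on 4 cores (main core 2, matching --main-lcore=2 in the
    # EAL parameters) and pause it at --wait-for-rpc
    $testdir/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten $scheduler_pid            # returns once /var/tmp/spdk.sock accepts RPCs

    # switch to the dynamic scheduler while init is paused, then let the framework come up
    rpc_cmd framework_set_scheduler dynamic
    rpc_cmd framework_start_init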
00:04:59.741 13:13:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.741 [2024-10-17 13:13:07.774109] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:59.742 13:13:07 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.742 13:13:07 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:59.742 13:13:07 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.742 13:13:07 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.742 13:13:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.001 ************************************ 00:05:00.001 START TEST scheduler_create_thread 00:05:00.001 ************************************ 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.001 2 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.001 3 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.001 4 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.001 5 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.001 
13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.001 6 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.001 7 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.001 8 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.001 9 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.001 10 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.001 13:13:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.978 13:13:08 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.978 13:13:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:00.978 13:13:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.978 13:13:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.358 13:13:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.358 13:13:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:02.359 13:13:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:02.359 13:13:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.359 13:13:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.295 13:13:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.295 00:05:03.295 real 0m3.382s 00:05:03.295 user 0m0.022s 00:05:03.295 sys 0m0.010s 00:05:03.295 13:13:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.295 13:13:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.295 ************************************ 00:05:03.295 END TEST scheduler_create_thread 00:05:03.295 ************************************ 00:05:03.295 13:13:11 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:03.295 13:13:11 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3831662 00:05:03.295 13:13:11 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 3831662 ']' 00:05:03.295 13:13:11 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 3831662 00:05:03.295 13:13:11 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:03.295 13:13:11 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:03.295 13:13:11 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3831662 00:05:03.296 13:13:11 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:03.296 13:13:11 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:03.296 13:13:11 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3831662' 00:05:03.296 killing process with pid 3831662 00:05:03.296 13:13:11 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 3831662 00:05:03.296 13:13:11 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 3831662 00:05:03.555 [2024-10-17 13:13:11.574216] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
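The scheduler_create_thread test that just passed is essentially a series of scheduler_plugin RPCs. Condensed from the trace (the loop is editorial shorthand, the run actually issues all active_pinned calls first and the idle_pinned calls afterwards; thread IDs 11 and 12 are simply the values returned in this run):

    # one busy and one idle thread pinned to each of the four cores
    for mask in 0x1 0x2 0x4 0x8; do
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m $mask -a 100
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m $mask -a 0
    done

    # unpinned threads: one created at 30, one created idle and raised to 50 at runtime,
    # and one created busy only to be deleted again
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active $thread_id 50    # thread 11 here
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete $thread_id           # thread 12 here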
00:05:03.814 00:05:03.814 real 0m4.450s 00:05:03.814 user 0m7.817s 00:05:03.814 sys 0m0.401s 00:05:03.814 13:13:11 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.814 13:13:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.814 ************************************ 00:05:03.814 END TEST event_scheduler 00:05:03.814 ************************************ 00:05:03.814 13:13:11 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:03.814 13:13:11 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:03.814 13:13:11 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.814 13:13:11 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.814 13:13:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.814 ************************************ 00:05:03.814 START TEST app_repeat 00:05:03.814 ************************************ 00:05:03.814 13:13:11 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:03.814 13:13:11 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.814 13:13:11 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.814 13:13:11 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:03.814 13:13:11 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.814 13:13:11 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:03.814 13:13:11 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:03.814 13:13:11 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:03.814 13:13:11 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3832514 00:05:03.814 13:13:11 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.814 13:13:11 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3832514' 00:05:03.814 Process app_repeat pid: 3832514 00:05:03.814 13:13:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:03.814 13:13:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:03.814 spdk_app_start Round 0 00:05:03.814 13:13:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3832514 /var/tmp/spdk-nbd.sock 00:05:03.814 13:13:11 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3832514 ']' 00:05:03.814 13:13:11 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:03.814 13:13:11 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:03.814 13:13:11 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:03.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:03.814 13:13:11 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:03.814 13:13:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:03.814 13:13:11 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:04.073 [2024-10-17 13:13:11.883443] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
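The app_repeat test starting here follows the same launch pattern as the scheduler sketch above, with two differences worth noting: the kernel nbd module is probed first, and the app gets its own RPC socket so the nbd RPCs below do not go through /var/tmp/spdk.sock. A sketch of that launch, using only commands that appear in the trace:

    modprobe -n nbd && modprobe nbd        # the trace dry-runs the probe before loading the module

    $testdir/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock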
00:05:04.073 [2024-10-17 13:13:11.883525] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3832514 ] 00:05:04.073 [2024-10-17 13:13:11.952965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.073 [2024-10-17 13:13:11.994403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.073 [2024-10-17 13:13:11.994406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.073 13:13:12 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:04.073 13:13:12 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:04.073 13:13:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.332 Malloc0 00:05:04.332 13:13:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.591 Malloc1 00:05:04.591 13:13:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.591 13:13:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.591 13:13:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.591 13:13:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:04.591 13:13:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.591 13:13:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:04.591 13:13:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.591 13:13:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.591 13:13:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.591 13:13:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:04.591 13:13:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.591 13:13:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:04.591 13:13:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:04.591 13:13:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:04.591 13:13:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.591 13:13:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:04.850 /dev/nbd0 00:05:04.850 13:13:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:04.850 13:13:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:04.850 13:13:12 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:04.850 13:13:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:04.850 13:13:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:04.850 13:13:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:04.850 13:13:12 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:05:04.850 13:13:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:04.850 13:13:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:04.850 13:13:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:04.850 13:13:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.850 1+0 records in 00:05:04.850 1+0 records out 00:05:04.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265145 s, 15.4 MB/s 00:05:04.850 13:13:12 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:04.850 13:13:12 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:04.850 13:13:12 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:04.850 13:13:12 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:04.850 13:13:12 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:04.850 13:13:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.850 13:13:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.850 13:13:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:05.109 /dev/nbd1 00:05:05.109 13:13:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:05.109 13:13:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:05.109 13:13:12 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:05.109 13:13:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:05.109 13:13:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:05.109 13:13:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:05.109 13:13:12 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:05.109 13:13:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:05.109 13:13:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:05.109 13:13:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:05.109 13:13:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.109 1+0 records in 00:05:05.109 1+0 records out 00:05:05.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246635 s, 16.6 MB/s 00:05:05.109 13:13:12 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:05.109 13:13:12 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:05.109 13:13:12 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:05.109 13:13:12 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:05.109 13:13:12 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:05.109 13:13:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.109 13:13:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
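Within each round, the device setup traced above reduces to four RPCs plus a readiness check. This sketch reproduces only what the log shows; rpc.py abbreviates the full scripts/rpc.py path used in the trace, and the waitfornbd body is the helper's own sequence of calls.

    # create two RAM-backed bdevs and export each one as a kernel NBD node
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # returns Malloc0
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # returns Malloc1
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1

    # waitfornbd, per the trace: confirm the node exists and a direct read succeeds
    grep -q -w nbd0 /proc/partitions
    dd if=/dev/nbd0 of=$testdir/nbdtest bs=4096 count=1 iflag=direct
    size=$(stat -c %s $testdir/nbdtest) && rm -f $testdir/nbdtest
    [ "$size" != 0 ]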
00:05:05.109 13:13:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.109 13:13:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.109 13:13:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:05.368 { 00:05:05.368 "nbd_device": "/dev/nbd0", 00:05:05.368 "bdev_name": "Malloc0" 00:05:05.368 }, 00:05:05.368 { 00:05:05.368 "nbd_device": "/dev/nbd1", 00:05:05.368 "bdev_name": "Malloc1" 00:05:05.368 } 00:05:05.368 ]' 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:05.368 { 00:05:05.368 "nbd_device": "/dev/nbd0", 00:05:05.368 "bdev_name": "Malloc0" 00:05:05.368 }, 00:05:05.368 { 00:05:05.368 "nbd_device": "/dev/nbd1", 00:05:05.368 "bdev_name": "Malloc1" 00:05:05.368 } 00:05:05.368 ]' 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:05.368 /dev/nbd1' 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:05.368 /dev/nbd1' 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:05.368 256+0 records in 00:05:05.368 256+0 records out 00:05:05.368 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111044 s, 94.4 MB/s 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:05.368 256+0 records in 00:05:05.368 256+0 records out 00:05:05.368 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199623 s, 52.5 MB/s 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:05.368 256+0 records in 00:05:05.368 256+0 records out 00:05:05.368 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214704 s, 48.8 
MB/s 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.368 13:13:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:05.369 13:13:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.369 13:13:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.369 13:13:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:05.369 13:13:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:05.369 13:13:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.369 13:13:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:05.627 13:13:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:05.627 13:13:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:05.627 13:13:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:05.627 13:13:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.627 13:13:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.627 13:13:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:05.627 13:13:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.627 13:13:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.627 13:13:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.627 13:13:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:05.886 13:13:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:05.886 13:13:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:05.886 13:13:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:05.886 13:13:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.886 13:13:13 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.886 13:13:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:05.886 13:13:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.886 13:13:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.886 13:13:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.886 13:13:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.886 13:13:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.886 13:13:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:05.886 13:13:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:05.886 13:13:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.145 13:13:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:06.145 13:13:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:06.145 13:13:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.145 13:13:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:06.145 13:13:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:06.145 13:13:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:06.145 13:13:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:06.145 13:13:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:06.145 13:13:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:06.145 13:13:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:06.145 13:13:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:06.404 [2024-10-17 13:13:14.317423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.404 [2024-10-17 13:13:14.353636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.404 [2024-10-17 13:13:14.353639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.404 [2024-10-17 13:13:14.393755] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:06.404 [2024-10-17 13:13:14.393799] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:09.690 13:13:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:09.690 13:13:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:09.690 spdk_app_start Round 1 00:05:09.690 13:13:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3832514 /var/tmp/spdk-nbd.sock 00:05:09.690 13:13:17 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3832514 ']' 00:05:09.690 13:13:17 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.690 13:13:17 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:09.690 13:13:17 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:09.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
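Each app_repeat round then runs the same data path just traced for Round 0 before tearing the devices down again. Condensed from the trace, with $testdir again abbreviating the .../spdk/test/event directory:

    # both exported nodes should be listed before any I/O is attempted
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device'   # /dev/nbd0 /dev/nbd1

    # push 1 MiB of random data through each device, then read it back and compare
    dd if=/dev/urandom of=$testdir/nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$testdir/nbdrandtest of=$nbd bs=4096 count=256 oflag=direct
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $testdir/nbdrandtest $nbd
    done
    rm $testdir/nbdrandtest

    # detach both nodes, confirm the list is empty, then stop this app instance;
    # app_repeat brings it back up for the next round, which is what the Round 1 echo above reflects
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
    count=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)   # 0 after teardown
    rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM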
00:05:09.690 13:13:17 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:09.690 13:13:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.690 13:13:17 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.690 13:13:17 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:09.690 13:13:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.690 Malloc0 00:05:09.690 13:13:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.690 Malloc1 00:05:09.949 13:13:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.949 13:13:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.949 13:13:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.949 13:13:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:09.949 13:13:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.949 13:13:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:09.949 13:13:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.949 13:13:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.949 13:13:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.949 13:13:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:09.949 13:13:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.949 13:13:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:09.949 13:13:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:09.949 13:13:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:09.949 13:13:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.949 13:13:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:09.949 /dev/nbd0 00:05:09.949 13:13:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:09.949 13:13:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:09.949 13:13:17 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:09.949 13:13:17 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:09.949 13:13:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:09.949 13:13:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:09.949 13:13:17 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:09.949 13:13:17 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:09.949 13:13:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:09.949 13:13:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:09.949 13:13:17 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.950 1+0 records in 00:05:09.950 1+0 records out 00:05:09.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277241 s, 14.8 MB/s 00:05:09.950 13:13:17 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:09.950 13:13:17 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:09.950 13:13:17 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:10.209 13:13:17 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:10.209 13:13:17 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:10.209 13:13:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.209 13:13:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.209 13:13:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:10.209 /dev/nbd1 00:05:10.209 13:13:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:10.209 13:13:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:10.209 13:13:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:10.209 13:13:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:10.209 13:13:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:10.209 13:13:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:10.209 13:13:18 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:10.209 13:13:18 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:10.209 13:13:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:10.209 13:13:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:10.209 13:13:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.209 1+0 records in 00:05:10.209 1+0 records out 00:05:10.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260665 s, 15.7 MB/s 00:05:10.209 13:13:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:10.209 13:13:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:10.209 13:13:18 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:10.209 13:13:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:10.209 13:13:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:10.209 13:13:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.209 13:13:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.209 13:13:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.209 13:13:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.209 13:13:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:05:10.468 13:13:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:10.468 { 00:05:10.468 "nbd_device": "/dev/nbd0", 00:05:10.468 "bdev_name": "Malloc0" 00:05:10.468 }, 00:05:10.468 { 00:05:10.468 "nbd_device": "/dev/nbd1", 00:05:10.468 "bdev_name": "Malloc1" 00:05:10.468 } 00:05:10.468 ]' 00:05:10.468 13:13:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:10.468 { 00:05:10.468 "nbd_device": "/dev/nbd0", 00:05:10.468 "bdev_name": "Malloc0" 00:05:10.468 }, 00:05:10.468 { 00:05:10.468 "nbd_device": "/dev/nbd1", 00:05:10.468 "bdev_name": "Malloc1" 00:05:10.468 } 00:05:10.468 ]' 00:05:10.468 13:13:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.468 13:13:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:10.468 /dev/nbd1' 00:05:10.468 13:13:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:10.468 /dev/nbd1' 00:05:10.468 13:13:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.468 13:13:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:10.468 13:13:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:10.468 13:13:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:10.468 13:13:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:10.468 13:13:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:10.468 13:13:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.468 13:13:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.468 13:13:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:10.468 13:13:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.468 13:13:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:10.468 13:13:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:10.468 256+0 records in 00:05:10.468 256+0 records out 00:05:10.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106897 s, 98.1 MB/s 00:05:10.468 13:13:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.468 13:13:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:10.727 256+0 records in 00:05:10.727 256+0 records out 00:05:10.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201954 s, 51.9 MB/s 00:05:10.727 13:13:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.727 13:13:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:10.727 256+0 records in 00:05:10.727 256+0 records out 00:05:10.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213088 s, 49.2 MB/s 00:05:10.727 13:13:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:10.727 13:13:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.727 13:13:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.727 13:13:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:05:10.727 13:13:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.727 13:13:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:10.727 13:13:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:10.727 13:13:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.727 13:13:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:10.727 13:13:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.727 13:13:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:10.727 13:13:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.727 13:13:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:10.727 13:13:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.727 13:13:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.727 13:13:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:10.727 13:13:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:10.727 13:13:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.727 13:13:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:10.986 13:13:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:10.986 13:13:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:10.986 13:13:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:10.986 13:13:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.986 13:13:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.986 13:13:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:10.986 13:13:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:10.986 13:13:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.986 13:13:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.986 13:13:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:10.986 13:13:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:10.986 13:13:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:10.986 13:13:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:10.986 13:13:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.986 13:13:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.986 13:13:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:10.986 13:13:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:10.986 13:13:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.986 13:13:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.986 13:13:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.986 13:13:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.246 13:13:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:11.246 13:13:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:11.246 13:13:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.246 13:13:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:11.246 13:13:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:11.246 13:13:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.246 13:13:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:11.246 13:13:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:11.246 13:13:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:11.246 13:13:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:11.246 13:13:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:11.246 13:13:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:11.246 13:13:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:11.504 13:13:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:11.763 [2024-10-17 13:13:19.607240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.763 [2024-10-17 13:13:19.643075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.763 [2024-10-17 13:13:19.643078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.763 [2024-10-17 13:13:19.683903] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:11.763 [2024-10-17 13:13:19.683947] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:15.052 13:13:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:15.052 13:13:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:15.052 spdk_app_start Round 2 00:05:15.052 13:13:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3832514 /var/tmp/spdk-nbd.sock 00:05:15.052 13:13:22 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3832514 ']' 00:05:15.052 13:13:22 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.052 13:13:22 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:15.052 13:13:22 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:15.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:15.052 13:13:22 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:15.052 13:13:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:15.052 13:13:22 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.052 13:13:22 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:15.052 13:13:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.052 Malloc0 00:05:15.052 13:13:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.052 Malloc1 00:05:15.052 13:13:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.052 13:13:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.052 13:13:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.052 13:13:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:15.052 13:13:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.052 13:13:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:15.052 13:13:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.052 13:13:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.052 13:13:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.052 13:13:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:15.052 13:13:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.052 13:13:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:15.052 13:13:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:15.052 13:13:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:15.052 13:13:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.052 13:13:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:15.311 /dev/nbd0 00:05:15.311 13:13:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:15.311 13:13:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:15.311 13:13:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:15.311 13:13:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:15.311 13:13:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:15.311 13:13:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:15.311 13:13:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:15.311 13:13:23 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:15.311 13:13:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:15.311 13:13:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:15.311 13:13:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.311 1+0 records in 00:05:15.311 1+0 records out 00:05:15.311 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000122379 s, 33.5 MB/s 00:05:15.311 13:13:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:15.311 13:13:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:15.311 13:13:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:15.311 13:13:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:15.311 13:13:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:15.311 13:13:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.311 13:13:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.311 13:13:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:15.570 /dev/nbd1 00:05:15.570 13:13:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:15.570 13:13:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:15.570 13:13:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:15.570 13:13:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:15.570 13:13:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:15.570 13:13:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:15.570 13:13:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:15.570 13:13:23 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:15.570 13:13:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:15.570 13:13:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:15.570 13:13:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.570 1+0 records in 00:05:15.570 1+0 records out 00:05:15.570 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373394 s, 11.0 MB/s 00:05:15.570 13:13:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:15.570 13:13:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:15.570 13:13:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:15.570 13:13:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:15.570 13:13:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:15.570 13:13:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.570 13:13:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.570 13:13:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.570 13:13:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.570 13:13:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:15.829 { 00:05:15.829 "nbd_device": "/dev/nbd0", 00:05:15.829 "bdev_name": "Malloc0" 00:05:15.829 }, 00:05:15.829 { 00:05:15.829 "nbd_device": "/dev/nbd1", 00:05:15.829 "bdev_name": "Malloc1" 00:05:15.829 } 00:05:15.829 ]' 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:15.829 { 00:05:15.829 "nbd_device": "/dev/nbd0", 00:05:15.829 "bdev_name": "Malloc0" 00:05:15.829 }, 00:05:15.829 { 00:05:15.829 "nbd_device": "/dev/nbd1", 00:05:15.829 "bdev_name": "Malloc1" 00:05:15.829 } 00:05:15.829 ]' 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:15.829 /dev/nbd1' 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:15.829 /dev/nbd1' 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:15.829 256+0 records in 00:05:15.829 256+0 records out 00:05:15.829 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108604 s, 96.6 MB/s 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:15.829 256+0 records in 00:05:15.829 256+0 records out 00:05:15.829 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201111 s, 52.1 MB/s 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:15.829 256+0 records in 00:05:15.829 256+0 records out 00:05:15.829 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215321 s, 48.7 MB/s 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.829 13:13:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:15.830 13:13:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.830 13:13:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.830 13:13:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:15.830 13:13:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:15.830 13:13:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.830 13:13:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:16.089 13:13:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:16.089 13:13:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:16.089 13:13:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:16.089 13:13:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.089 13:13:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.089 13:13:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:16.089 13:13:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.089 13:13:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.089 13:13:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.089 13:13:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:16.347 13:13:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:16.347 13:13:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:16.347 13:13:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:16.347 13:13:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.347 13:13:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.347 13:13:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:16.347 13:13:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.347 13:13:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.347 13:13:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.347 13:13:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.347 13:13:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.607 13:13:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:16.607 13:13:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:16.607 13:13:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.607 13:13:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:16.607 13:13:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:16.607 13:13:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.607 13:13:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:16.607 13:13:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:16.607 13:13:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:16.607 13:13:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:16.607 13:13:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:16.607 13:13:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:16.607 13:13:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:16.866 13:13:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:16.866 [2024-10-17 13:13:24.848311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.866 [2024-10-17 13:13:24.883860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.866 [2024-10-17 13:13:24.883863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.125 [2024-10-17 13:13:24.924040] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:17.125 [2024-10-17 13:13:24.924077] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:19.659 13:13:27 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3832514 /var/tmp/spdk-nbd.sock 00:05:19.659 13:13:27 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3832514 ']' 00:05:19.659 13:13:27 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:19.659 13:13:27 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.659 13:13:27 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:19.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
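For readability, the nbd data-verify pass that the xtrace above walks through can be condensed into the sketch below. It only restates the RPC calls and dd/cmp steps visible in the trace (socket path, bdev sizes, block counts and the jq filter are copied from it); the waitfornbd/waitfornbd_exit retry loops, error handling and the Jenkins workspace paths are omitted, and the temp-file location used here is illustrative.

    #!/usr/bin/env bash
    # Condensed restatement of the nbd data-verify sequence in the trace above.
    rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    tmp=/tmp/nbdrandtest   # the traced run writes spdk/test/event/nbdrandtest

    # Two 64 MB malloc bdevs with a 4096-byte block size, exported over nbd.
    $rpc bdev_malloc_create 64 4096          # -> Malloc0
    $rpc bdev_malloc_create 64 4096          # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    # Both devices must show up in nbd_get_disks.
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    [ "$count" -eq 2 ] || exit 1

    # Write 1 MiB of random data to each device, then read-compare it.
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$tmp" "$dev"
    done
    rm "$tmp"

    # Teardown: stop the nbd devices, confirm none are left, stop the app.
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    left=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    [ "$left" -eq 0 ] || exit 1
    $rpc spdk_kill_instance SIGTERM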
00:05:19.659 13:13:27 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.659 13:13:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.918 13:13:27 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.918 13:13:27 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:19.918 13:13:27 event.app_repeat -- event/event.sh@39 -- # killprocess 3832514 00:05:19.918 13:13:27 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 3832514 ']' 00:05:19.918 13:13:27 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 3832514 00:05:19.918 13:13:27 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:19.918 13:13:27 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:19.918 13:13:27 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3832514 00:05:19.918 13:13:27 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:19.918 13:13:27 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:19.918 13:13:27 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3832514' 00:05:19.918 killing process with pid 3832514 00:05:19.918 13:13:27 event.app_repeat -- common/autotest_common.sh@969 -- # kill 3832514 00:05:19.918 13:13:27 event.app_repeat -- common/autotest_common.sh@974 -- # wait 3832514 00:05:20.177 spdk_app_start is called in Round 0. 00:05:20.177 Shutdown signal received, stop current app iteration 00:05:20.177 Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 reinitialization... 00:05:20.177 spdk_app_start is called in Round 1. 00:05:20.177 Shutdown signal received, stop current app iteration 00:05:20.177 Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 reinitialization... 00:05:20.177 spdk_app_start is called in Round 2. 00:05:20.177 Shutdown signal received, stop current app iteration 00:05:20.177 Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 reinitialization... 00:05:20.177 spdk_app_start is called in Round 3. 
00:05:20.177 Shutdown signal received, stop current app iteration 00:05:20.177 13:13:28 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:20.177 13:13:28 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:20.177 00:05:20.177 real 0m16.221s 00:05:20.177 user 0m34.871s 00:05:20.177 sys 0m3.205s 00:05:20.177 13:13:28 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.177 13:13:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:20.177 ************************************ 00:05:20.177 END TEST app_repeat 00:05:20.177 ************************************ 00:05:20.177 13:13:28 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:20.177 13:13:28 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:20.177 13:13:28 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.177 13:13:28 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.177 13:13:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.177 ************************************ 00:05:20.177 START TEST cpu_locks 00:05:20.177 ************************************ 00:05:20.177 13:13:28 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:20.436 * Looking for test storage... 00:05:20.436 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:20.436 13:13:28 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:20.436 13:13:28 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:20.436 13:13:28 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:20.436 13:13:28 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:20.436 13:13:28 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.436 13:13:28 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.436 13:13:28 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.436 13:13:28 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.436 13:13:28 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.436 13:13:28 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.436 13:13:28 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.436 13:13:28 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.436 13:13:28 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.436 13:13:28 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.436 13:13:28 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.436 13:13:28 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:20.436 13:13:28 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:20.436 13:13:28 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.436 13:13:28 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:20.436 13:13:28 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:20.436 13:13:28 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:20.436 13:13:28 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.436 13:13:28 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:20.436 13:13:28 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.436 13:13:28 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:20.436 13:13:28 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:20.436 13:13:28 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.437 13:13:28 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:20.437 13:13:28 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.437 13:13:28 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.437 13:13:28 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.437 13:13:28 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:20.437 13:13:28 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.437 13:13:28 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:20.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.437 --rc genhtml_branch_coverage=1 00:05:20.437 --rc genhtml_function_coverage=1 00:05:20.437 --rc genhtml_legend=1 00:05:20.437 --rc geninfo_all_blocks=1 00:05:20.437 --rc geninfo_unexecuted_blocks=1 00:05:20.437 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:20.437 ' 00:05:20.437 13:13:28 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:20.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.437 --rc genhtml_branch_coverage=1 00:05:20.437 --rc genhtml_function_coverage=1 00:05:20.437 --rc genhtml_legend=1 00:05:20.437 --rc geninfo_all_blocks=1 00:05:20.437 --rc geninfo_unexecuted_blocks=1 00:05:20.437 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:20.437 ' 00:05:20.437 13:13:28 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:20.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.437 --rc genhtml_branch_coverage=1 00:05:20.437 --rc genhtml_function_coverage=1 00:05:20.437 --rc genhtml_legend=1 00:05:20.437 --rc geninfo_all_blocks=1 00:05:20.437 --rc geninfo_unexecuted_blocks=1 00:05:20.437 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:20.437 ' 00:05:20.437 13:13:28 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:20.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.437 --rc genhtml_branch_coverage=1 00:05:20.437 --rc genhtml_function_coverage=1 00:05:20.437 --rc genhtml_legend=1 00:05:20.437 --rc geninfo_all_blocks=1 00:05:20.437 --rc geninfo_unexecuted_blocks=1 00:05:20.437 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:20.437 ' 00:05:20.437 13:13:28 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:20.437 13:13:28 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:20.437 13:13:28 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:20.437 13:13:28 event.cpu_locks -- 
event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:20.437 13:13:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.437 13:13:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.437 13:13:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.437 ************************************ 00:05:20.437 START TEST default_locks 00:05:20.437 ************************************ 00:05:20.437 13:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:20.437 13:13:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3835564 00:05:20.437 13:13:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3835564 00:05:20.437 13:13:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:20.437 13:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3835564 ']' 00:05:20.437 13:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.437 13:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.437 13:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.437 13:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.437 13:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.437 [2024-10-17 13:13:28.417932] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
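The lcov gate traced a little earlier (scripts/common.sh lt/cmp_versions) decides whether the installed lcov is older than 2 by splitting each version string on '.', '-' and ':' and comparing the fields numerically. The helper below is only an illustrative restatement of that comparison for numeric fields; it is not the scripts/common.sh source.

    # Illustrative field-by-field version compare in the style of the traced
    # cmp_versions helper: split on '.', '-' and ':', compare numerically.
    # Assumes purely numeric fields; the real helper validates them first.
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a < b )) && return 0    # strictly older
            (( a > b )) && return 1    # newer
        done
        return 1                        # equal is not less-than
    }

    version_lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* options"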
00:05:20.437 [2024-10-17 13:13:28.417990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3835564 ] 00:05:20.696 [2024-10-17 13:13:28.488269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.696 [2024-10-17 13:13:28.529829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.696 13:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.696 13:13:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:20.696 13:13:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3835564 00:05:20.696 13:13:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.696 13:13:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3835564 00:05:21.264 lslocks: write error 00:05:21.264 13:13:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3835564 00:05:21.264 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 3835564 ']' 00:05:21.264 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 3835564 00:05:21.264 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:21.264 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:21.264 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3835564 00:05:21.264 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:21.265 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:21.265 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3835564' 00:05:21.265 killing process with pid 3835564 00:05:21.265 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 3835564 00:05:21.265 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 3835564 00:05:21.524 13:13:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3835564 00:05:21.524 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:21.524 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3835564 00:05:21.524 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:21.524 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:21.524 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:21.524 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:21.524 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3835564 00:05:21.524 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3835564 ']' 00:05:21.524 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.524 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:05:21.524 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.524 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.524 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.524 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3835564) - No such process 00:05:21.524 ERROR: process (pid: 3835564) is no longer running 00:05:21.524 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:21.524 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:21.524 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:21.524 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:21.524 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:21.524 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:21.524 13:13:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:21.524 13:13:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:21.524 13:13:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:21.524 13:13:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:21.524 00:05:21.524 real 0m1.041s 00:05:21.524 user 0m1.003s 00:05:21.524 sys 0m0.500s 00:05:21.524 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.524 13:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.524 ************************************ 00:05:21.524 END TEST default_locks 00:05:21.524 ************************************ 00:05:21.524 13:13:29 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:21.524 13:13:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.524 13:13:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.524 13:13:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.524 ************************************ 00:05:21.524 START TEST default_locks_via_rpc 00:05:21.524 ************************************ 00:05:21.524 13:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:21.524 13:13:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3835715 00:05:21.524 13:13:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3835715 00:05:21.524 13:13:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.524 13:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3835715 ']' 00:05:21.524 13:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.524 13:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 
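The default_locks test that finishes above asserts the lock the target takes on its core: the check visible in the trace is simply lslocks on the target pid filtered for spdk_cpu_lock. A minimal standalone restatement of that probe follows; the traced test waits for the RPC socket with waitforlisten instead of sleeping.

    # Minimal restatement of the core-lock probe used by the trace:
    # an spdk_tgt started with -m 0x1 holds an advisory lock on an
    # spdk_cpu_lock file, which lslocks can report for the process.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    ./build/bin/spdk_tgt -m 0x1 &
    tgt_pid=$!
    sleep 1    # illustrative stand-in for waitforlisten on /var/tmp/spdk.sock

    if locks_exist "$tgt_pid"; then
        echo "core lock held by pid $tgt_pid"
    fi
    kill "$tgt_pid"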
00:05:21.524 13:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.524 13:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.524 13:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.524 [2024-10-17 13:13:29.540393] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:05:21.524 [2024-10-17 13:13:29.540455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3835715 ] 00:05:21.784 [2024-10-17 13:13:29.610173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.784 [2024-10-17 13:13:29.653745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.042 13:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.042 13:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:22.042 13:13:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:22.042 13:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.042 13:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.042 13:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.042 13:13:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:22.042 13:13:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:22.042 13:13:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:22.042 13:13:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:22.042 13:13:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:22.042 13:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.042 13:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.042 13:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.042 13:13:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3835715 00:05:22.042 13:13:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3835715 00:05:22.042 13:13:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:22.611 13:13:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3835715 00:05:22.611 13:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 3835715 ']' 00:05:22.611 13:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 3835715 00:05:22.611 13:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:22.611 13:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- 
# '[' Linux = Linux ']' 00:05:22.611 13:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3835715 00:05:22.611 13:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:22.611 13:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:22.611 13:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3835715' 00:05:22.611 killing process with pid 3835715 00:05:22.611 13:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 3835715 00:05:22.611 13:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 3835715 00:05:22.870 00:05:22.870 real 0m1.273s 00:05:22.870 user 0m1.269s 00:05:22.870 sys 0m0.582s 00:05:22.870 13:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.870 13:13:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.870 ************************************ 00:05:22.870 END TEST default_locks_via_rpc 00:05:22.870 ************************************ 00:05:22.870 13:13:30 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:22.870 13:13:30 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.870 13:13:30 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.870 13:13:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.870 ************************************ 00:05:22.870 START TEST non_locking_app_on_locked_coremask 00:05:22.870 ************************************ 00:05:22.870 13:13:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:22.870 13:13:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3836016 00:05:22.870 13:13:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3836016 /var/tmp/spdk.sock 00:05:22.870 13:13:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.870 13:13:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3836016 ']' 00:05:22.871 13:13:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.871 13:13:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.871 13:13:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.871 13:13:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.871 13:13:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.871 [2024-10-17 13:13:30.901047] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
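The default_locks_via_rpc run that ends above exercises the same lock from the RPC side: framework_disable_cpumask_locks releases the core locks of a running target and framework_enable_cpumask_locks re-takes them. A compressed sketch of that round trip, with waitforlisten replaced by a sleep for brevity:

    # Sketch of the RPC round trip traced above: drop and re-acquire the
    # CPU core locks of a running target without restarting it.
    rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"

    ./build/bin/spdk_tgt -m 0x1 &
    tgt_pid=$!
    sleep 1    # illustrative stand-in for waitforlisten

    $rpc framework_disable_cpumask_locks
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock || echo "no core lock held"

    $rpc framework_enable_cpumask_locks
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "core lock re-acquired"

    kill "$tgt_pid"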
00:05:22.871 [2024-10-17 13:13:30.901136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3836016 ] 00:05:23.130 [2024-10-17 13:13:30.969538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.130 [2024-10-17 13:13:31.012470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.389 13:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:23.389 13:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:23.389 13:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3836053 00:05:23.389 13:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3836053 /var/tmp/spdk2.sock 00:05:23.389 13:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:23.389 13:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3836053 ']' 00:05:23.389 13:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.389 13:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.389 13:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.389 13:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.389 13:13:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.389 [2024-10-17 13:13:31.245763] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:05:23.389 [2024-10-17 13:13:31.245830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3836053 ] 00:05:23.389 [2024-10-17 13:13:31.338595] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:23.389 [2024-10-17 13:13:31.338623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.389 [2024-10-17 13:13:31.421802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.328 13:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.328 13:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:24.328 13:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3836016 00:05:24.328 13:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3836016 00:05:24.328 13:13:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:25.266 lslocks: write error 00:05:25.266 13:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3836016 00:05:25.266 13:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3836016 ']' 00:05:25.266 13:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3836016 00:05:25.266 13:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:25.266 13:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.266 13:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3836016 00:05:25.266 13:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.266 13:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.266 13:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3836016' 00:05:25.266 killing process with pid 3836016 00:05:25.266 13:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3836016 00:05:25.266 13:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3836016 00:05:25.838 13:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3836053 00:05:25.838 13:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3836053 ']' 00:05:25.838 13:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3836053 00:05:25.838 13:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:25.838 13:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.838 13:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3836053 00:05:25.838 13:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.838 13:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.838 13:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3836053' 00:05:25.838 
killing process with pid 3836053 00:05:25.838 13:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3836053 00:05:25.838 13:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3836053 00:05:26.097 00:05:26.097 real 0m3.194s 00:05:26.097 user 0m3.375s 00:05:26.097 sys 0m1.182s 00:05:26.097 13:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.097 13:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.097 ************************************ 00:05:26.097 END TEST non_locking_app_on_locked_coremask 00:05:26.097 ************************************ 00:05:26.097 13:13:34 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:26.097 13:13:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.097 13:13:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.097 13:13:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.357 ************************************ 00:05:26.357 START TEST locking_app_on_unlocked_coremask 00:05:26.357 ************************************ 00:05:26.357 13:13:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:26.357 13:13:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3836588 00:05:26.357 13:13:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3836588 /var/tmp/spdk.sock 00:05:26.357 13:13:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:26.357 13:13:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3836588 ']' 00:05:26.357 13:13:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.357 13:13:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.357 13:13:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.357 13:13:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.357 13:13:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.357 [2024-10-17 13:13:34.175493] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:05:26.357 [2024-10-17 13:13:34.175561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3836588 ] 00:05:26.357 [2024-10-17 13:13:34.244997] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
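The non_locking_app_on_locked_coremask case that completes above shows the flag-based variant: the first target claims core 0, and a second target still starts on the same core mask because it passes --disable-cpumask-locks and talks over its own RPC socket. Condensed, with the waitforlisten steps again replaced by sleeps:

    # The first instance claims core 0; the second shares the core because it
    # opts out of the lock and uses a separate RPC socket, as in the trace.
    ./build/bin/spdk_tgt -m 0x1 &
    pid1=$!
    sleep 1    # illustrative stand-in for waitforlisten /var/tmp/spdk.sock

    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    sleep 1    # illustrative stand-in for waitforlisten /var/tmp/spdk2.sock

    lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "core 0 lock stays with the first target"
    kill "$pid1" "$pid2"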
00:05:26.357 [2024-10-17 13:13:34.245024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.357 [2024-10-17 13:13:34.287865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.617 13:13:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.617 13:13:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:26.617 13:13:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3836729 00:05:26.617 13:13:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3836729 /var/tmp/spdk2.sock 00:05:26.617 13:13:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:26.617 13:13:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3836729 ']' 00:05:26.617 13:13:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.617 13:13:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.617 13:13:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.617 13:13:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.617 13:13:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.617 [2024-10-17 13:13:34.534304] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
00:05:26.617 [2024-10-17 13:13:34.534393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3836729 ] 00:05:26.617 [2024-10-17 13:13:34.622687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.876 [2024-10-17 13:13:34.715180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.445 13:13:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.445 13:13:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:27.445 13:13:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3836729 00:05:27.445 13:13:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3836729 00:05:27.445 13:13:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.012 lslocks: write error 00:05:28.012 13:13:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3836588 00:05:28.012 13:13:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3836588 ']' 00:05:28.012 13:13:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3836588 00:05:28.012 13:13:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:28.012 13:13:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:28.012 13:13:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3836588 00:05:28.012 13:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:28.012 13:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:28.012 13:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3836588' 00:05:28.012 killing process with pid 3836588 00:05:28.012 13:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3836588 00:05:28.012 13:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3836588 00:05:28.581 13:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3836729 00:05:28.581 13:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3836729 ']' 00:05:28.581 13:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3836729 00:05:28.581 13:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:28.581 13:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:28.581 13:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3836729 00:05:28.841 13:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:28.841 13:13:36 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:28.841 13:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3836729' 00:05:28.841 killing process with pid 3836729 00:05:28.841 13:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3836729 00:05:28.841 13:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3836729 00:05:29.100 00:05:29.100 real 0m2.804s 00:05:29.100 user 0m2.937s 00:05:29.100 sys 0m1.034s 00:05:29.100 13:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.100 13:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.100 ************************************ 00:05:29.100 END TEST locking_app_on_unlocked_coremask 00:05:29.100 ************************************ 00:05:29.100 13:13:36 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:29.100 13:13:36 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.100 13:13:36 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.100 13:13:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.100 ************************************ 00:05:29.100 START TEST locking_app_on_locked_coremask 00:05:29.101 ************************************ 00:05:29.101 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:29.101 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3837156 00:05:29.101 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3837156 /var/tmp/spdk.sock 00:05:29.101 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.101 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3837156 ']' 00:05:29.101 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.101 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:29.101 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.101 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:29.101 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.101 [2024-10-17 13:13:37.061252] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
00:05:29.101 [2024-10-17 13:13:37.061329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3837156 ] 00:05:29.101 [2024-10-17 13:13:37.130106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.360 [2024-10-17 13:13:37.174593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.360 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.360 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:29.360 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3837195 00:05:29.360 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3837195 /var/tmp/spdk2.sock 00:05:29.360 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:29.360 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:29.360 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3837195 /var/tmp/spdk2.sock 00:05:29.360 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:29.360 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:29.360 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:29.360 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:29.360 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3837195 /var/tmp/spdk2.sock 00:05:29.360 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3837195 ']' 00:05:29.360 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.360 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:29.360 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.360 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:29.360 13:13:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.360 [2024-10-17 13:13:37.406713] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
00:05:29.360 [2024-10-17 13:13:37.406807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3837195 ] 00:05:29.619 [2024-10-17 13:13:37.498350] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3837156 has claimed it. 00:05:29.619 [2024-10-17 13:13:37.498389] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:30.186 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3837195) - No such process 00:05:30.186 ERROR: process (pid: 3837195) is no longer running 00:05:30.186 13:13:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.186 13:13:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:30.186 13:13:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:30.186 13:13:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:30.186 13:13:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:30.186 13:13:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:30.186 13:13:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3837156 00:05:30.186 13:13:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3837156 00:05:30.186 13:13:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:30.754 lslocks: write error 00:05:30.754 13:13:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3837156 00:05:30.754 13:13:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3837156 ']' 00:05:30.754 13:13:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3837156 00:05:30.754 13:13:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:30.754 13:13:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:30.754 13:13:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3837156 00:05:30.754 13:13:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:30.754 13:13:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:30.754 13:13:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3837156' 00:05:30.754 killing process with pid 3837156 00:05:30.754 13:13:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3837156 00:05:30.754 13:13:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3837156 00:05:31.013 00:05:31.013 real 0m1.881s 00:05:31.013 user 0m2.006s 00:05:31.013 sys 0m0.683s 00:05:31.013 13:13:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:05:31.013 13:13:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.013 ************************************ 00:05:31.013 END TEST locking_app_on_locked_coremask 00:05:31.013 ************************************ 00:05:31.013 13:13:38 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:31.013 13:13:38 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.013 13:13:38 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.013 13:13:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.013 ************************************ 00:05:31.013 START TEST locking_overlapped_coremask 00:05:31.013 ************************************ 00:05:31.013 13:13:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:31.013 13:13:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3837555 00:05:31.013 13:13:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3837555 /var/tmp/spdk.sock 00:05:31.013 13:13:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:31.013 13:13:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3837555 ']' 00:05:31.013 13:13:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.013 13:13:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.013 13:13:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.013 13:13:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.013 13:13:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.013 [2024-10-17 13:13:39.027481] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
00:05:31.013 [2024-10-17 13:13:39.027540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3837555 ] 00:05:31.273 [2024-10-17 13:13:39.095028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:31.273 [2024-10-17 13:13:39.140855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.273 [2024-10-17 13:13:39.140948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.273 [2024-10-17 13:13:39.140951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.532 13:13:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:31.532 13:13:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:31.532 13:13:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3837713 00:05:31.532 13:13:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3837713 /var/tmp/spdk2.sock 00:05:31.532 13:13:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:31.532 13:13:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:31.532 13:13:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3837713 /var/tmp/spdk2.sock 00:05:31.532 13:13:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:31.532 13:13:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.532 13:13:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:31.532 13:13:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.532 13:13:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3837713 /var/tmp/spdk2.sock 00:05:31.532 13:13:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3837713 ']' 00:05:31.532 13:13:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.532 13:13:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.532 13:13:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:31.532 13:13:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.532 13:13:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.532 [2024-10-17 13:13:39.378294] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
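The first target above runs with -m 0x7 (binary 111, so reactors on cores 0, 1 and 2, matching the three reactor messages), while the second one is launched with -m 0x1c. The failure that follows comes from the single core the two masks share; plain shell arithmetic, not part of the test itself, makes the overlap visible:

    printf '0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. bit 2, so core 2 is the contested core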
00:05:31.532 [2024-10-17 13:13:39.378356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3837713 ] 00:05:31.532 [2024-10-17 13:13:39.471891] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3837555 has claimed it. 00:05:31.532 [2024-10-17 13:13:39.471929] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:32.100 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3837713) - No such process 00:05:32.100 ERROR: process (pid: 3837713) is no longer running 00:05:32.100 13:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.100 13:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:32.100 13:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:32.100 13:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:32.100 13:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:32.100 13:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:32.100 13:13:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:32.101 13:13:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:32.101 13:13:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:32.101 13:13:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:32.101 13:13:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3837555 00:05:32.101 13:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 3837555 ']' 00:05:32.101 13:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 3837555 00:05:32.101 13:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:32.101 13:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:32.101 13:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3837555 00:05:32.101 13:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:32.101 13:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:32.101 13:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3837555' 00:05:32.101 killing process with pid 3837555 00:05:32.101 13:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 3837555 00:05:32.101 13:13:40 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 3837555 00:05:32.359 00:05:32.359 real 0m1.404s 00:05:32.359 user 0m3.917s 00:05:32.359 sys 0m0.408s 00:05:32.359 13:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.359 13:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.359 ************************************ 00:05:32.359 END TEST locking_overlapped_coremask 00:05:32.359 ************************************ 00:05:32.618 13:13:40 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:32.618 13:13:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.618 13:13:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.618 13:13:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.618 ************************************ 00:05:32.618 START TEST locking_overlapped_coremask_via_rpc 00:05:32.618 ************************************ 00:05:32.618 13:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:32.618 13:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3837822 00:05:32.618 13:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3837822 /var/tmp/spdk.sock 00:05:32.618 13:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:32.618 13:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3837822 ']' 00:05:32.618 13:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.618 13:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.618 13:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.618 13:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.618 13:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.618 [2024-10-17 13:13:40.514198] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:05:32.619 [2024-10-17 13:13:40.514258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3837822 ] 00:05:32.619 [2024-10-17 13:13:40.581477] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:32.619 [2024-10-17 13:13:40.581510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:32.619 [2024-10-17 13:13:40.625035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.619 [2024-10-17 13:13:40.625128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.619 [2024-10-17 13:13:40.625131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.876 13:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.876 13:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:32.876 13:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3837997 00:05:32.876 13:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3837997 /var/tmp/spdk2.sock 00:05:32.876 13:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:32.876 13:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3837997 ']' 00:05:32.876 13:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.876 13:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.876 13:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:32.876 13:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.876 13:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.876 [2024-10-17 13:13:40.858206] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:05:32.876 [2024-10-17 13:13:40.858271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3837997 ] 00:05:33.134 [2024-10-17 13:13:40.949945] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
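Both targets in this via_rpc variant come up cleanly despite the overlapping masks, because --disable-cpumask-locks defers the core lock claim until framework_enable_cpumask_locks is called, as the RPC exchange that follows shows. A condensed sketch of the setup exercised here, with paths written relative to the SPDK tree rather than the Jenkins workspace:

    ./build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &                         # no core locks taken at startup
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # overlaps on core 2, still starts
    ./scripts/rpc.py framework_enable_cpumask_locks                                # first target now claims cores 0-2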
00:05:33.134 [2024-10-17 13:13:40.949976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:33.134 [2024-10-17 13:13:41.034737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:33.134 [2024-10-17 13:13:41.038200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.134 [2024-10-17 13:13:41.038202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:33.700 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.700 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:33.700 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:33.700 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.700 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.700 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.700 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:33.700 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:33.700 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:33.700 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:33.700 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.700 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:33.700 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.700 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:33.700 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.700 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.700 [2024-10-17 13:13:41.740211] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3837822 has claimed it. 
00:05:33.700 request: 00:05:33.700 { 00:05:33.700 "method": "framework_enable_cpumask_locks", 00:05:33.700 "req_id": 1 00:05:33.700 } 00:05:33.700 Got JSON-RPC error response 00:05:33.700 response: 00:05:33.700 { 00:05:33.700 "code": -32603, 00:05:33.700 "message": "Failed to claim CPU core: 2" 00:05:33.700 } 00:05:33.959 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:33.959 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:33.959 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:33.959 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:33.959 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:33.959 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3837822 /var/tmp/spdk.sock 00:05:33.959 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3837822 ']' 00:05:33.959 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.959 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.959 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.959 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.959 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.959 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.959 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:33.959 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3837997 /var/tmp/spdk2.sock 00:05:33.959 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3837997 ']' 00:05:33.959 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.959 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.959 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
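The -32603 response above is the JSON-RPC form of the same core-2 collision seen earlier. For reference, the call could be reproduced by hand against the second target's socket (a sketch, reusing the rpc.py path that appears later in this log):

    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # expected to fail with code -32603 ("Failed to claim CPU core: 2")
    # for as long as the first target holds the lock on core 2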
00:05:33.959 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.959 13:13:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.219 13:13:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.219 13:13:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:34.219 13:13:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:34.219 13:13:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:34.219 13:13:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:34.219 13:13:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:34.219 00:05:34.219 real 0m1.659s 00:05:34.219 user 0m0.792s 00:05:34.219 sys 0m0.154s 00:05:34.219 13:13:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.219 13:13:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.219 ************************************ 00:05:34.219 END TEST locking_overlapped_coremask_via_rpc 00:05:34.219 ************************************ 00:05:34.219 13:13:42 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:34.219 13:13:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3837822 ]] 00:05:34.219 13:13:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3837822 00:05:34.219 13:13:42 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3837822 ']' 00:05:34.219 13:13:42 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3837822 00:05:34.219 13:13:42 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:34.219 13:13:42 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:34.219 13:13:42 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3837822 00:05:34.219 13:13:42 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:34.219 13:13:42 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:34.219 13:13:42 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3837822' 00:05:34.219 killing process with pid 3837822 00:05:34.219 13:13:42 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3837822 00:05:34.219 13:13:42 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3837822 00:05:34.788 13:13:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3837997 ]] 00:05:34.788 13:13:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3837997 00:05:34.788 13:13:42 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3837997 ']' 00:05:34.788 13:13:42 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3837997 00:05:34.788 13:13:42 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:34.788 13:13:42 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:05:34.788 13:13:42 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3837997 00:05:34.788 13:13:42 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:34.788 13:13:42 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:34.788 13:13:42 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3837997' 00:05:34.788 killing process with pid 3837997 00:05:34.788 13:13:42 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3837997 00:05:34.788 13:13:42 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3837997 00:05:35.048 13:13:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:35.048 13:13:42 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:35.048 13:13:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3837822 ]] 00:05:35.048 13:13:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3837822 00:05:35.048 13:13:42 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3837822 ']' 00:05:35.048 13:13:42 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3837822 00:05:35.048 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3837822) - No such process 00:05:35.048 13:13:42 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3837822 is not found' 00:05:35.048 Process with pid 3837822 is not found 00:05:35.048 13:13:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3837997 ]] 00:05:35.048 13:13:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3837997 00:05:35.048 13:13:42 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3837997 ']' 00:05:35.048 13:13:42 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3837997 00:05:35.048 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3837997) - No such process 00:05:35.048 13:13:42 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3837997 is not found' 00:05:35.048 Process with pid 3837997 is not found 00:05:35.048 13:13:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:35.048 00:05:35.048 real 0m14.779s 00:05:35.048 user 0m25.124s 00:05:35.048 sys 0m5.638s 00:05:35.048 13:13:42 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.048 13:13:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.048 ************************************ 00:05:35.048 END TEST cpu_locks 00:05:35.048 ************************************ 00:05:35.048 00:05:35.048 real 0m39.616s 00:05:35.048 user 1m14.322s 00:05:35.048 sys 0m9.940s 00:05:35.048 13:13:42 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.048 13:13:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.048 ************************************ 00:05:35.048 END TEST event 00:05:35.049 ************************************ 00:05:35.049 13:13:43 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:35.049 13:13:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.049 13:13:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.049 13:13:43 -- common/autotest_common.sh@10 -- # set +x 00:05:35.049 ************************************ 00:05:35.049 START TEST thread 00:05:35.049 ************************************ 00:05:35.049 13:13:43 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:35.308 * Looking for test storage... 00:05:35.308 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:05:35.308 13:13:43 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:35.308 13:13:43 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:35.308 13:13:43 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:35.308 13:13:43 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:35.308 13:13:43 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.309 13:13:43 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.309 13:13:43 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.309 13:13:43 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.309 13:13:43 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.309 13:13:43 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.309 13:13:43 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.309 13:13:43 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.309 13:13:43 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.309 13:13:43 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.309 13:13:43 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.309 13:13:43 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:35.309 13:13:43 thread -- scripts/common.sh@345 -- # : 1 00:05:35.309 13:13:43 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.309 13:13:43 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.309 13:13:43 thread -- scripts/common.sh@365 -- # decimal 1 00:05:35.309 13:13:43 thread -- scripts/common.sh@353 -- # local d=1 00:05:35.309 13:13:43 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.309 13:13:43 thread -- scripts/common.sh@355 -- # echo 1 00:05:35.309 13:13:43 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.309 13:13:43 thread -- scripts/common.sh@366 -- # decimal 2 00:05:35.309 13:13:43 thread -- scripts/common.sh@353 -- # local d=2 00:05:35.309 13:13:43 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.309 13:13:43 thread -- scripts/common.sh@355 -- # echo 2 00:05:35.309 13:13:43 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.309 13:13:43 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.309 13:13:43 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.309 13:13:43 thread -- scripts/common.sh@368 -- # return 0 00:05:35.309 13:13:43 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.309 13:13:43 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:35.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.309 --rc genhtml_branch_coverage=1 00:05:35.309 --rc genhtml_function_coverage=1 00:05:35.309 --rc genhtml_legend=1 00:05:35.309 --rc geninfo_all_blocks=1 00:05:35.309 --rc geninfo_unexecuted_blocks=1 00:05:35.309 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:35.309 ' 00:05:35.309 13:13:43 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:35.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.309 --rc genhtml_branch_coverage=1 00:05:35.309 --rc genhtml_function_coverage=1 00:05:35.309 --rc genhtml_legend=1 
00:05:35.309 --rc geninfo_all_blocks=1 00:05:35.309 --rc geninfo_unexecuted_blocks=1 00:05:35.309 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:35.309 ' 00:05:35.309 13:13:43 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:35.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.309 --rc genhtml_branch_coverage=1 00:05:35.309 --rc genhtml_function_coverage=1 00:05:35.309 --rc genhtml_legend=1 00:05:35.309 --rc geninfo_all_blocks=1 00:05:35.309 --rc geninfo_unexecuted_blocks=1 00:05:35.309 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:35.309 ' 00:05:35.309 13:13:43 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:35.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.309 --rc genhtml_branch_coverage=1 00:05:35.309 --rc genhtml_function_coverage=1 00:05:35.309 --rc genhtml_legend=1 00:05:35.309 --rc geninfo_all_blocks=1 00:05:35.309 --rc geninfo_unexecuted_blocks=1 00:05:35.309 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:35.309 ' 00:05:35.309 13:13:43 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:35.309 13:13:43 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:35.309 13:13:43 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.309 13:13:43 thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.309 ************************************ 00:05:35.309 START TEST thread_poller_perf 00:05:35.309 ************************************ 00:05:35.309 13:13:43 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:35.309 [2024-10-17 13:13:43.246273] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:05:35.309 [2024-10-17 13:13:43.246355] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3838402 ] 00:05:35.309 [2024-10-17 13:13:43.315309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.309 [2024-10-17 13:13:43.354723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.309 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:36.689 [2024-10-17T11:13:44.741Z] ====================================== 00:05:36.689 [2024-10-17T11:13:44.741Z] busy:2503522612 (cyc) 00:05:36.689 [2024-10-17T11:13:44.741Z] total_run_count: 861000 00:05:36.689 [2024-10-17T11:13:44.741Z] tsc_hz: 2500000000 (cyc) 00:05:36.689 [2024-10-17T11:13:44.741Z] ====================================== 00:05:36.689 [2024-10-17T11:13:44.741Z] poller_cost: 2907 (cyc), 1162 (nsec) 00:05:36.689 00:05:36.689 real 0m1.163s 00:05:36.689 user 0m1.082s 00:05:36.689 sys 0m0.076s 00:05:36.689 13:13:44 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.689 13:13:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:36.689 ************************************ 00:05:36.689 END TEST thread_poller_perf 00:05:36.690 ************************************ 00:05:36.690 13:13:44 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:36.690 13:13:44 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:36.690 13:13:44 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.690 13:13:44 thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.690 ************************************ 00:05:36.690 START TEST thread_poller_perf 00:05:36.690 ************************************ 00:05:36.690 13:13:44 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:36.690 [2024-10-17 13:13:44.498448] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:05:36.690 [2024-10-17 13:13:44.498531] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3838685 ] 00:05:36.690 [2024-10-17 13:13:44.569498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.690 [2024-10-17 13:13:44.610036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.690 Running 1000 pollers for 1 seconds with 0 microseconds period. 
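Before the zero-period run reports below, note how poller_cost in the block above falls out of the other counters: total busy cycles divided by total_run_count, then converted to nanoseconds with tsc_hz. Using the numbers just printed:

    python3 -c 'print(2503522612 // 861000)'    # 2907  cycles per poller invocation
    python3 -c 'print(int(2907 / 2.5))'         # 1162  ns, with tsc_hz = 2.5 GHz (2.5 cycles per ns)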
00:05:37.628 [2024-10-17T11:13:45.680Z] ====================================== 00:05:37.628 [2024-10-17T11:13:45.680Z] busy:2501276732 (cyc) 00:05:37.628 [2024-10-17T11:13:45.680Z] total_run_count: 13646000 00:05:37.628 [2024-10-17T11:13:45.680Z] tsc_hz: 2500000000 (cyc) 00:05:37.628 [2024-10-17T11:13:45.680Z] ====================================== 00:05:37.628 [2024-10-17T11:13:45.680Z] poller_cost: 183 (cyc), 73 (nsec) 00:05:37.628 00:05:37.628 real 0m1.165s 00:05:37.628 user 0m1.080s 00:05:37.628 sys 0m0.081s 00:05:37.628 13:13:45 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.628 13:13:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:37.628 ************************************ 00:05:37.628 END TEST thread_poller_perf 00:05:37.628 ************************************ 00:05:37.888 13:13:45 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:05:37.888 13:13:45 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:37.888 13:13:45 thread -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.888 13:13:45 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.888 13:13:45 thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.888 ************************************ 00:05:37.888 START TEST thread_spdk_lock 00:05:37.888 ************************************ 00:05:37.888 13:13:45 thread.thread_spdk_lock -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:37.888 [2024-10-17 13:13:45.733290] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:05:37.888 [2024-10-17 13:13:45.733350] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3838967 ] 00:05:37.888 [2024-10-17 13:13:45.799706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.888 [2024-10-17 13:13:45.838802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.888 [2024-10-17 13:13:45.838804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.457 [2024-10-17 13:13:46.336849] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 980:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:38.457 [2024-10-17 13:13:46.336883] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3099:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:05:38.457 [2024-10-17 13:13:46.336893] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3054:sspin_stacks_print: *ERROR*: spinlock 0x14ca840 00:05:38.457 [2024-10-17 13:13:46.337654] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:38.457 [2024-10-17 13:13:46.337760] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1041:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:38.457 [2024-10-17 
13:13:46.337779] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:38.457 Starting test contend 00:05:38.457 Worker Delay Wait us Hold us Total us 00:05:38.457 0 3 171981 190440 362421 00:05:38.457 1 5 88276 290707 378983 00:05:38.457 PASS test contend 00:05:38.457 Starting test hold_by_poller 00:05:38.457 PASS test hold_by_poller 00:05:38.457 Starting test hold_by_message 00:05:38.457 PASS test hold_by_message 00:05:38.457 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:05:38.457 100014 assertions passed 00:05:38.457 0 assertions failed 00:05:38.457 00:05:38.457 real 0m0.650s 00:05:38.457 user 0m1.070s 00:05:38.457 sys 0m0.075s 00:05:38.457 13:13:46 thread.thread_spdk_lock -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.457 13:13:46 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:05:38.457 ************************************ 00:05:38.457 END TEST thread_spdk_lock 00:05:38.457 ************************************ 00:05:38.457 00:05:38.457 real 0m3.359s 00:05:38.457 user 0m3.391s 00:05:38.457 sys 0m0.487s 00:05:38.457 13:13:46 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.457 13:13:46 thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.457 ************************************ 00:05:38.457 END TEST thread 00:05:38.457 ************************************ 00:05:38.457 13:13:46 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:38.457 13:13:46 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:05:38.457 13:13:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.457 13:13:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.457 13:13:46 -- common/autotest_common.sh@10 -- # set +x 00:05:38.457 ************************************ 00:05:38.457 START TEST app_cmdline 00:05:38.457 ************************************ 00:05:38.457 13:13:46 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:05:38.743 * Looking for test storage... 
00:05:38.743 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:05:38.743 13:13:46 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:38.743 13:13:46 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:38.743 13:13:46 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:38.743 13:13:46 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.743 13:13:46 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:38.743 13:13:46 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.743 13:13:46 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:38.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.743 --rc genhtml_branch_coverage=1 00:05:38.743 --rc genhtml_function_coverage=1 00:05:38.743 --rc genhtml_legend=1 00:05:38.743 --rc geninfo_all_blocks=1 00:05:38.743 --rc geninfo_unexecuted_blocks=1 00:05:38.743 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:38.743 ' 00:05:38.743 13:13:46 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:38.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.743 --rc genhtml_branch_coverage=1 00:05:38.743 --rc genhtml_function_coverage=1 00:05:38.743 --rc 
genhtml_legend=1 00:05:38.743 --rc geninfo_all_blocks=1 00:05:38.743 --rc geninfo_unexecuted_blocks=1 00:05:38.743 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:38.743 ' 00:05:38.743 13:13:46 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:38.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.743 --rc genhtml_branch_coverage=1 00:05:38.743 --rc genhtml_function_coverage=1 00:05:38.743 --rc genhtml_legend=1 00:05:38.743 --rc geninfo_all_blocks=1 00:05:38.743 --rc geninfo_unexecuted_blocks=1 00:05:38.743 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:38.743 ' 00:05:38.743 13:13:46 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:38.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.743 --rc genhtml_branch_coverage=1 00:05:38.743 --rc genhtml_function_coverage=1 00:05:38.743 --rc genhtml_legend=1 00:05:38.743 --rc geninfo_all_blocks=1 00:05:38.743 --rc geninfo_unexecuted_blocks=1 00:05:38.743 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:38.743 ' 00:05:38.743 13:13:46 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:38.744 13:13:46 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3839140 00:05:38.744 13:13:46 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:38.744 13:13:46 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3839140 00:05:38.744 13:13:46 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 3839140 ']' 00:05:38.744 13:13:46 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.744 13:13:46 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.744 13:13:46 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.744 13:13:46 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.744 13:13:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:38.744 [2024-10-17 13:13:46.711552] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
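cmdline.sh starts this target with --rpcs-allowed spdk_get_version,rpc_get_methods, and the calls that follow exercise exactly that allowlist: the two listed methods answer normally and anything else is rejected. By hand the same split could be checked roughly like this (a sketch, using the same rpc.py as the trace below):

    ./scripts/rpc.py spdk_get_version            # on the allowlist, returns the version object
    ./scripts/rpc.py env_dpdk_get_mem_stats      # not on the allowlist: -32601 "Method not found"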
00:05:38.744 [2024-10-17 13:13:46.711642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3839140 ] 00:05:38.744 [2024-10-17 13:13:46.779519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.076 [2024-10-17 13:13:46.823394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.076 13:13:47 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.076 13:13:47 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:39.076 13:13:47 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:39.342 { 00:05:39.342 "version": "SPDK v25.01-pre git sha1 cca20a51a", 00:05:39.342 "fields": { 00:05:39.342 "major": 25, 00:05:39.342 "minor": 1, 00:05:39.342 "patch": 0, 00:05:39.342 "suffix": "-pre", 00:05:39.342 "commit": "cca20a51a" 00:05:39.342 } 00:05:39.342 } 00:05:39.342 13:13:47 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:39.342 13:13:47 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:39.342 13:13:47 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:39.342 13:13:47 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:39.342 13:13:47 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:39.342 13:13:47 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:39.342 13:13:47 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:39.342 13:13:47 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.342 13:13:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:39.342 13:13:47 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.342 13:13:47 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:39.342 13:13:47 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:39.342 13:13:47 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:39.342 13:13:47 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:39.342 13:13:47 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:39.342 13:13:47 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:05:39.342 13:13:47 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.342 13:13:47 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:05:39.342 13:13:47 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.342 13:13:47 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:05:39.342 13:13:47 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.342 13:13:47 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:05:39.342 13:13:47 app_cmdline -- 
common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:05:39.342 13:13:47 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:39.601 request: 00:05:39.601 { 00:05:39.601 "method": "env_dpdk_get_mem_stats", 00:05:39.601 "req_id": 1 00:05:39.601 } 00:05:39.601 Got JSON-RPC error response 00:05:39.601 response: 00:05:39.601 { 00:05:39.601 "code": -32601, 00:05:39.601 "message": "Method not found" 00:05:39.601 } 00:05:39.601 13:13:47 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:39.601 13:13:47 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:39.601 13:13:47 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:39.602 13:13:47 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:39.602 13:13:47 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3839140 00:05:39.602 13:13:47 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 3839140 ']' 00:05:39.602 13:13:47 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 3839140 00:05:39.602 13:13:47 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:39.602 13:13:47 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:39.602 13:13:47 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3839140 00:05:39.602 13:13:47 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:39.602 13:13:47 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:39.602 13:13:47 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3839140' 00:05:39.602 killing process with pid 3839140 00:05:39.602 13:13:47 app_cmdline -- common/autotest_common.sh@969 -- # kill 3839140 00:05:39.602 13:13:47 app_cmdline -- common/autotest_common.sh@974 -- # wait 3839140 00:05:39.861 00:05:39.861 real 0m1.304s 00:05:39.861 user 0m1.485s 00:05:39.861 sys 0m0.490s 00:05:39.861 13:13:47 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.861 13:13:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:39.861 ************************************ 00:05:39.861 END TEST app_cmdline 00:05:39.861 ************************************ 00:05:39.861 13:13:47 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:05:39.861 13:13:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.861 13:13:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.861 13:13:47 -- common/autotest_common.sh@10 -- # set +x 00:05:39.861 ************************************ 00:05:39.861 START TEST version 00:05:39.861 ************************************ 00:05:39.861 13:13:47 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:05:40.121 * Looking for test storage... 
00:05:40.121 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:05:40.121 13:13:47 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:40.121 13:13:47 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:40.121 13:13:47 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:40.121 13:13:48 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:40.121 13:13:48 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.121 13:13:48 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.121 13:13:48 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.121 13:13:48 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.121 13:13:48 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.121 13:13:48 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.121 13:13:48 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.121 13:13:48 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.121 13:13:48 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.121 13:13:48 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.121 13:13:48 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.121 13:13:48 version -- scripts/common.sh@344 -- # case "$op" in 00:05:40.121 13:13:48 version -- scripts/common.sh@345 -- # : 1 00:05:40.121 13:13:48 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.121 13:13:48 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.121 13:13:48 version -- scripts/common.sh@365 -- # decimal 1 00:05:40.121 13:13:48 version -- scripts/common.sh@353 -- # local d=1 00:05:40.121 13:13:48 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.121 13:13:48 version -- scripts/common.sh@355 -- # echo 1 00:05:40.121 13:13:48 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.121 13:13:48 version -- scripts/common.sh@366 -- # decimal 2 00:05:40.121 13:13:48 version -- scripts/common.sh@353 -- # local d=2 00:05:40.121 13:13:48 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.121 13:13:48 version -- scripts/common.sh@355 -- # echo 2 00:05:40.121 13:13:48 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.121 13:13:48 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.121 13:13:48 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.121 13:13:48 version -- scripts/common.sh@368 -- # return 0 00:05:40.121 13:13:48 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.121 13:13:48 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:40.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.121 --rc genhtml_branch_coverage=1 00:05:40.121 --rc genhtml_function_coverage=1 00:05:40.121 --rc genhtml_legend=1 00:05:40.121 --rc geninfo_all_blocks=1 00:05:40.121 --rc geninfo_unexecuted_blocks=1 00:05:40.121 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.121 ' 00:05:40.121 13:13:48 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:40.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.121 --rc genhtml_branch_coverage=1 00:05:40.121 --rc genhtml_function_coverage=1 00:05:40.121 --rc genhtml_legend=1 00:05:40.121 --rc geninfo_all_blocks=1 00:05:40.121 --rc geninfo_unexecuted_blocks=1 00:05:40.121 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.121 ' 00:05:40.121 13:13:48 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:40.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.121 --rc genhtml_branch_coverage=1 00:05:40.121 --rc genhtml_function_coverage=1 00:05:40.121 --rc genhtml_legend=1 00:05:40.121 --rc geninfo_all_blocks=1 00:05:40.121 --rc geninfo_unexecuted_blocks=1 00:05:40.121 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.122 ' 00:05:40.122 13:13:48 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:40.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.122 --rc genhtml_branch_coverage=1 00:05:40.122 --rc genhtml_function_coverage=1 00:05:40.122 --rc genhtml_legend=1 00:05:40.122 --rc geninfo_all_blocks=1 00:05:40.122 --rc geninfo_unexecuted_blocks=1 00:05:40.122 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.122 ' 00:05:40.122 13:13:48 version -- app/version.sh@17 -- # get_header_version major 00:05:40.122 13:13:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:05:40.122 13:13:48 version -- app/version.sh@14 -- # cut -f2 00:05:40.122 13:13:48 version -- app/version.sh@14 -- # tr -d '"' 00:05:40.122 13:13:48 version -- app/version.sh@17 -- # major=25 00:05:40.122 13:13:48 version -- app/version.sh@18 -- # get_header_version minor 00:05:40.122 13:13:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:05:40.122 13:13:48 version -- app/version.sh@14 -- # cut -f2 00:05:40.122 13:13:48 version -- app/version.sh@14 -- # tr -d '"' 00:05:40.122 13:13:48 version -- app/version.sh@18 -- # minor=1 00:05:40.122 13:13:48 version -- app/version.sh@19 -- # get_header_version patch 00:05:40.122 13:13:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:05:40.122 13:13:48 version -- app/version.sh@14 -- # cut -f2 00:05:40.122 13:13:48 version -- app/version.sh@14 -- # tr -d '"' 00:05:40.122 13:13:48 version -- app/version.sh@19 -- # patch=0 00:05:40.122 13:13:48 version -- app/version.sh@20 -- # get_header_version suffix 00:05:40.122 13:13:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:05:40.122 13:13:48 version -- app/version.sh@14 -- # cut -f2 00:05:40.122 13:13:48 version -- app/version.sh@14 -- # tr -d '"' 00:05:40.122 13:13:48 version -- app/version.sh@20 -- # suffix=-pre 00:05:40.122 13:13:48 version -- app/version.sh@22 -- # version=25.1 00:05:40.122 13:13:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:40.122 13:13:48 version -- app/version.sh@28 -- # version=25.1rc0 00:05:40.122 13:13:48 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:05:40.122 13:13:48 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:05:40.122 13:13:48 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:40.122 13:13:48 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:40.122 00:05:40.122 real 0m0.232s 00:05:40.122 user 0m0.138s 00:05:40.122 sys 0m0.144s 00:05:40.122 13:13:48 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.122 13:13:48 version -- common/autotest_common.sh@10 -- # set +x 00:05:40.122 ************************************ 00:05:40.122 END TEST version 00:05:40.122 ************************************ 00:05:40.122 13:13:48 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:40.122 13:13:48 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:40.122 13:13:48 -- spdk/autotest.sh@194 -- # uname -s 00:05:40.122 13:13:48 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:40.122 13:13:48 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:40.122 13:13:48 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:40.122 13:13:48 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:40.122 13:13:48 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:40.122 13:13:48 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:40.122 13:13:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:40.122 13:13:48 -- common/autotest_common.sh@10 -- # set +x 00:05:40.381 13:13:48 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:40.381 13:13:48 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:40.381 13:13:48 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:05:40.381 13:13:48 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:05:40.381 13:13:48 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:05:40.381 13:13:48 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:05:40.381 13:13:48 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:05:40.381 13:13:48 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:05:40.381 13:13:48 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:05:40.381 13:13:48 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:05:40.381 13:13:48 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:05:40.381 13:13:48 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:05:40.381 13:13:48 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:05:40.381 13:13:48 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:05:40.381 13:13:48 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:05:40.381 13:13:48 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:05:40.381 13:13:48 -- spdk/autotest.sh@370 -- # [[ 1 -eq 1 ]] 00:05:40.381 13:13:48 -- spdk/autotest.sh@371 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:05:40.381 13:13:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.381 13:13:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.381 13:13:48 -- common/autotest_common.sh@10 -- # set +x 00:05:40.381 ************************************ 00:05:40.381 START TEST llvm_fuzz 00:05:40.381 ************************************ 00:05:40.381 13:13:48 llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:05:40.381 * Looking for test storage... 
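The version test that finishes here derives the expected version string from include/spdk/version.h using grep, cut, and tr (major 25, minor 1, patch 0, suffix -pre, giving 25.1rc0) and then compares it with what python3 reports for the spdk package. A condensed sketch of that assembly, with the header path assumed relative to an SPDK checkout and the real logic living in test/app/version.sh, is:

    # Sketch of the header parsing traced in app/version.sh above.
    get_header_version() {
        # e.g. get_header_version MAJOR -> 25 (fields in version.h are tab-separated)
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h \
            | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)    # 25
    minor=$(get_header_version MINOR)    # 1
    patch=$(get_header_version PATCH)    # 0
    suffix=$(get_header_version SUFFIX)  # -pre
    version="$major.$minor"
    ((patch != 0)) && version="$version.$patch"
    [[ $suffix == -pre ]] && version="${version}rc0"
    echo "$version"    # 25.1rc0, matching python3 -c 'import spdk; print(spdk.__version__)'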
00:05:40.382 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:05:40.382 13:13:48 llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:40.382 13:13:48 llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:05:40.382 13:13:48 llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:40.382 13:13:48 llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:40.382 13:13:48 llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.382 13:13:48 llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.382 13:13:48 llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.382 13:13:48 llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.382 13:13:48 llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.382 13:13:48 llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.382 13:13:48 llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.382 13:13:48 llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.382 13:13:48 llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.382 13:13:48 llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.382 13:13:48 llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.382 13:13:48 llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:05:40.382 13:13:48 llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:05:40.382 13:13:48 llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.382 13:13:48 llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.382 13:13:48 llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:05:40.641 13:13:48 llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:05:40.641 13:13:48 llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.641 13:13:48 llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:05:40.641 13:13:48 llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.641 13:13:48 llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:05:40.641 13:13:48 llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:05:40.641 13:13:48 llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.641 13:13:48 llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:05:40.641 13:13:48 llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.641 13:13:48 llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.641 13:13:48 llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.641 13:13:48 llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:05:40.641 13:13:48 llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.641 13:13:48 llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:40.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.641 --rc genhtml_branch_coverage=1 00:05:40.641 --rc genhtml_function_coverage=1 00:05:40.641 --rc genhtml_legend=1 00:05:40.641 --rc geninfo_all_blocks=1 00:05:40.641 --rc geninfo_unexecuted_blocks=1 00:05:40.641 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.641 ' 00:05:40.641 13:13:48 llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:40.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.641 --rc genhtml_branch_coverage=1 00:05:40.641 --rc genhtml_function_coverage=1 00:05:40.641 --rc genhtml_legend=1 00:05:40.641 --rc geninfo_all_blocks=1 00:05:40.641 --rc 
geninfo_unexecuted_blocks=1 00:05:40.641 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.641 ' 00:05:40.641 13:13:48 llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:40.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.641 --rc genhtml_branch_coverage=1 00:05:40.641 --rc genhtml_function_coverage=1 00:05:40.642 --rc genhtml_legend=1 00:05:40.642 --rc geninfo_all_blocks=1 00:05:40.642 --rc geninfo_unexecuted_blocks=1 00:05:40.642 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.642 ' 00:05:40.642 13:13:48 llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:40.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.642 --rc genhtml_branch_coverage=1 00:05:40.642 --rc genhtml_function_coverage=1 00:05:40.642 --rc genhtml_legend=1 00:05:40.642 --rc geninfo_all_blocks=1 00:05:40.642 --rc geninfo_unexecuted_blocks=1 00:05:40.642 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.642 ' 00:05:40.642 13:13:48 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:05:40.642 13:13:48 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:05:40.642 13:13:48 llvm_fuzz -- common/autotest_common.sh@548 -- # fuzzers=() 00:05:40.642 13:13:48 llvm_fuzz -- common/autotest_common.sh@548 -- # local fuzzers 00:05:40.642 13:13:48 llvm_fuzz -- common/autotest_common.sh@550 -- # [[ -n '' ]] 00:05:40.642 13:13:48 llvm_fuzz -- common/autotest_common.sh@553 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:05:40.642 13:13:48 llvm_fuzz -- common/autotest_common.sh@554 -- # fuzzers=("${fuzzers[@]##*/}") 00:05:40.642 13:13:48 llvm_fuzz -- common/autotest_common.sh@557 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:05:40.642 13:13:48 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:05:40.642 13:13:48 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:05:40.642 13:13:48 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:05:40.642 13:13:48 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:05:40.642 13:13:48 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:05:40.642 13:13:48 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:05:40.642 13:13:48 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:05:40.642 13:13:48 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:05:40.642 13:13:48 llvm_fuzz -- fuzz/llvm.sh@19 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:05:40.642 13:13:48 llvm_fuzz -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.642 13:13:48 llvm_fuzz -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.642 13:13:48 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:05:40.642 ************************************ 00:05:40.642 START TEST nvmf_llvm_fuzz 00:05:40.642 ************************************ 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:05:40.642 * Looking for test storage... 
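At this point llvm.sh builds its list of fuzz suites simply by globbing test/fuzz/llvm/ and keeping the basenames (common.sh llvm-gcov.sh nvmf vfio), creates the corpus and llvm output directories, and dispatches only the directory entries through their run.sh scripts, starting with nvmf below. Roughly, under the same layout assumptions and with a placeholder checkout path:

    # Sketch of the fuzz-suite enumeration and dispatch seen in the trace above.
    rootdir=/path/to/spdk                    # assumed checkout location
    fuzzers=("$rootdir"/test/fuzz/llvm/*)    # common.sh llvm-gcov.sh nvmf vfio
    fuzzers=("${fuzzers[@]##*/}")            # strip directories, keep basenames
    mkdir -p "$rootdir/../corpus" "$rootdir/../output/llvm"
    for fuzzer in "${fuzzers[@]}"; do
        case "$fuzzer" in
            nvmf | vfio) "$rootdir/test/fuzz/llvm/$fuzzer/run.sh" ;;  # real suites
            *) ;;                                                     # helper files, skipped
        esac
    done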
00:05:40.642 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:40.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.642 --rc genhtml_branch_coverage=1 00:05:40.642 --rc genhtml_function_coverage=1 00:05:40.642 --rc genhtml_legend=1 00:05:40.642 --rc geninfo_all_blocks=1 00:05:40.642 --rc geninfo_unexecuted_blocks=1 00:05:40.642 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.642 ' 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:40.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.642 --rc genhtml_branch_coverage=1 00:05:40.642 --rc genhtml_function_coverage=1 00:05:40.642 --rc genhtml_legend=1 00:05:40.642 --rc geninfo_all_blocks=1 00:05:40.642 --rc geninfo_unexecuted_blocks=1 00:05:40.642 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.642 ' 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:40.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.642 --rc genhtml_branch_coverage=1 00:05:40.642 --rc genhtml_function_coverage=1 00:05:40.642 --rc genhtml_legend=1 00:05:40.642 --rc geninfo_all_blocks=1 00:05:40.642 --rc geninfo_unexecuted_blocks=1 00:05:40.642 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.642 ' 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:40.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.642 --rc genhtml_branch_coverage=1 00:05:40.642 --rc genhtml_function_coverage=1 00:05:40.642 --rc genhtml_legend=1 00:05:40.642 --rc geninfo_all_blocks=1 00:05:40.642 --rc geninfo_unexecuted_blocks=1 00:05:40.642 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.642 ' 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- 
# CONFIG_OCF_PATH= 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:40.642 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER=y 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_HAVE_EVP_MAC=y 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:05:40.643 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_SHARED=n 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_FC=n 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_URING=n 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # 
readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:05:40.905 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:05:40.906 #define SPDK_CONFIG_H 00:05:40.906 #define SPDK_CONFIG_AIO_FSDEV 1 00:05:40.906 #define SPDK_CONFIG_APPS 1 00:05:40.906 #define SPDK_CONFIG_ARCH native 00:05:40.906 #undef SPDK_CONFIG_ASAN 00:05:40.906 #undef SPDK_CONFIG_AVAHI 00:05:40.906 #undef SPDK_CONFIG_CET 00:05:40.906 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:05:40.906 #define SPDK_CONFIG_COVERAGE 1 00:05:40.906 #define SPDK_CONFIG_CROSS_PREFIX 00:05:40.906 #undef SPDK_CONFIG_CRYPTO 00:05:40.906 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:40.906 #undef SPDK_CONFIG_CUSTOMOCF 00:05:40.906 #undef SPDK_CONFIG_DAOS 00:05:40.906 #define SPDK_CONFIG_DAOS_DIR 00:05:40.906 #define SPDK_CONFIG_DEBUG 1 00:05:40.906 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:40.906 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:05:40.906 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:40.906 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:40.906 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:40.906 #undef SPDK_CONFIG_DPDK_UADK 00:05:40.906 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:05:40.906 #define SPDK_CONFIG_EXAMPLES 1 00:05:40.906 #undef SPDK_CONFIG_FC 00:05:40.906 #define SPDK_CONFIG_FC_PATH 00:05:40.906 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:40.906 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:40.906 #define SPDK_CONFIG_FSDEV 1 00:05:40.906 #undef SPDK_CONFIG_FUSE 00:05:40.906 #define SPDK_CONFIG_FUZZER 1 00:05:40.906 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:05:40.906 #undef SPDK_CONFIG_GOLANG 00:05:40.906 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:05:40.906 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:40.906 #define 
SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:40.906 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:05:40.906 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:40.906 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:40.906 #undef SPDK_CONFIG_HAVE_LZ4 00:05:40.906 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:05:40.906 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:05:40.906 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:40.906 #define SPDK_CONFIG_IDXD 1 00:05:40.906 #define SPDK_CONFIG_IDXD_KERNEL 1 00:05:40.906 #undef SPDK_CONFIG_IPSEC_MB 00:05:40.906 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:40.906 #define SPDK_CONFIG_ISAL 1 00:05:40.906 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:40.906 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:40.906 #define SPDK_CONFIG_LIBDIR 00:05:40.906 #undef SPDK_CONFIG_LTO 00:05:40.906 #define SPDK_CONFIG_MAX_LCORES 128 00:05:40.906 #define SPDK_CONFIG_NVME_CUSE 1 00:05:40.906 #undef SPDK_CONFIG_OCF 00:05:40.906 #define SPDK_CONFIG_OCF_PATH 00:05:40.906 #define SPDK_CONFIG_OPENSSL_PATH 00:05:40.906 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:40.906 #define SPDK_CONFIG_PGO_DIR 00:05:40.906 #undef SPDK_CONFIG_PGO_USE 00:05:40.906 #define SPDK_CONFIG_PREFIX /usr/local 00:05:40.906 #undef SPDK_CONFIG_RAID5F 00:05:40.906 #undef SPDK_CONFIG_RBD 00:05:40.906 #define SPDK_CONFIG_RDMA 1 00:05:40.906 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:40.906 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:40.906 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:40.906 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:40.906 #undef SPDK_CONFIG_SHARED 00:05:40.906 #undef SPDK_CONFIG_SMA 00:05:40.906 #define SPDK_CONFIG_TESTS 1 00:05:40.906 #undef SPDK_CONFIG_TSAN 00:05:40.906 #define SPDK_CONFIG_UBLK 1 00:05:40.906 #define SPDK_CONFIG_UBSAN 1 00:05:40.906 #undef SPDK_CONFIG_UNIT_TESTS 00:05:40.906 #undef SPDK_CONFIG_URING 00:05:40.906 #define SPDK_CONFIG_URING_PATH 00:05:40.906 #undef SPDK_CONFIG_URING_ZNS 00:05:40.906 #undef SPDK_CONFIG_USDT 00:05:40.906 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:40.906 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:40.906 #define SPDK_CONFIG_VFIO_USER 1 00:05:40.906 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:40.906 #define SPDK_CONFIG_VHOST 1 00:05:40.906 #define SPDK_CONFIG_VIRTIO 1 00:05:40.906 #undef SPDK_CONFIG_VTUNE 00:05:40.906 #define SPDK_CONFIG_VTUNE_DIR 00:05:40.906 #define SPDK_CONFIG_WERROR 1 00:05:40.906 #define SPDK_CONFIG_WPDK_DIR 00:05:40.906 #undef SPDK_CONFIG_XNVME 00:05:40.906 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:05:40.906 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:05:40.907 13:13:48 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:40.907 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # 
PYTHONDONTWRITEBYTECODE=1 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@204 -- # cat 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV= 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ 1 -eq 1 ]] 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV=1 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@277 -- # export valgrind= 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@277 -- # valgrind= 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@283 -- # uname -s 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # MAKE=make 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j112 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@307 -- # TEST_MODE= 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # [[ -z 3839741 ]] 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # kill -0 3839741 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@342 -- # local mount target_dir 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@344 -- # local -A mounts fss sizes 
avails uses 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.vFc90a 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.vFc90a/tests/nvmf /tmp/spdk.vFc90a 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # df -T 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=607576064 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4676853760 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:05:40.908 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=53003370496 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=61730627584 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=8727257088 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:05:40.909 
13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=30860550144 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=30865313792 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4763648 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=12340133888 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=12346126336 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=5992448 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=30864261120 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=30865313792 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=1052672 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=6173048832 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=6173061120 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:05:40.909 * Looking for test storage... 
00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@379 -- # local target_space new_size 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # mount=/ 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # target_space=53003370496 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@392 -- # new_size=10941849600 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:05:40.909 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # return 0 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1683 -- # true 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:40.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.909 --rc genhtml_branch_coverage=1 00:05:40.909 --rc genhtml_function_coverage=1 00:05:40.909 --rc genhtml_legend=1 00:05:40.909 --rc geninfo_all_blocks=1 00:05:40.909 --rc geninfo_unexecuted_blocks=1 00:05:40.909 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.909 ' 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:40.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.909 --rc genhtml_branch_coverage=1 00:05:40.909 --rc genhtml_function_coverage=1 00:05:40.909 --rc genhtml_legend=1 00:05:40.909 --rc geninfo_all_blocks=1 00:05:40.909 --rc geninfo_unexecuted_blocks=1 00:05:40.909 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.909 ' 00:05:40.909 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:40.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.909 --rc genhtml_branch_coverage=1 00:05:40.910 --rc genhtml_function_coverage=1 00:05:40.910 --rc genhtml_legend=1 00:05:40.910 --rc geninfo_all_blocks=1 00:05:40.910 --rc geninfo_unexecuted_blocks=1 00:05:40.910 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.910 ' 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:40.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.910 --rc genhtml_branch_coverage=1 00:05:40.910 --rc genhtml_function_coverage=1 00:05:40.910 --rc genhtml_legend=1 00:05:40.910 --rc geninfo_all_blocks=1 00:05:40.910 --rc geninfo_unexecuted_blocks=1 00:05:40.910 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.910 ' 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:05:40.910 13:13:48 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:05:40.910 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:05:41.170 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:05:41.170 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:05:41.170 13:13:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:05:41.170 [2024-10-17 13:13:48.967348] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:05:41.170 [2024-10-17 13:13:48.967404] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3839806 ] 00:05:41.170 [2024-10-17 13:13:49.143632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.170 [2024-10-17 13:13:49.176993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.429 [2024-10-17 13:13:49.235755] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:41.429 [2024-10-17 13:13:49.252128] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:05:41.429 INFO: Running with entropic power schedule (0xFF, 100). 00:05:41.429 INFO: Seed: 3375181617 00:05:41.429 INFO: Loaded 1 modules (384632 inline 8-bit counters): 384632 [0x2bf274c, 0x2c505c4), 00:05:41.429 INFO: Loaded 1 PC tables (384632 PCs): 384632 [0x2c505c8,0x322ed48), 00:05:41.429 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:05:41.429 INFO: A corpus is not provided, starting from an empty corpus 00:05:41.429 #2 INITED exec/s: 0 rss: 65Mb 00:05:41.429 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:05:41.429 This may also happen if the target rejected all inputs we tried so far 00:05:41.429 [2024-10-17 13:13:49.297508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d9) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:41.429 [2024-10-17 13:13:49.297537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:41.689 NEW_FUNC[1/715]: 0x43bbc8 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:05:41.689 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:05:41.689 #21 NEW cov: 12169 ft: 12165 corp: 2/95b lim: 320 exec/s: 0 rss: 73Mb L: 94/94 MS: 4 ShuffleBytes-CopyPart-InsertByte-InsertRepeatedBytes- 00:05:41.689 [2024-10-17 13:13:49.618305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:50ffffff cdw10:50505050 cdw11:50505050 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5050505050505050 00:05:41.689 [2024-10-17 13:13:49.618337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:41.689 NEW_FUNC[1/1]: 0x14fb868 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2213 00:05:41.689 #26 NEW cov: 12314 ft: 13063 corp: 3/188b lim: 320 exec/s: 0 rss: 73Mb L: 93/94 MS: 5 CopyPart-CrossOver-ChangeByte-ChangeASCIIInt-InsertRepeatedBytes- 00:05:41.689 [2024-10-17 13:13:49.658307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:50ffffff cdw10:50505050 cdw11:50505050 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5050505050505050 00:05:41.689 [2024-10-17 13:13:49.658335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:41.689 #27 NEW cov: 12320 ft: 13291 corp: 4/281b lim: 320 exec/s: 0 rss: 73Mb L: 93/94 MS: 1 ShuffleBytes- 00:05:41.689 [2024-10-17 13:13:49.718643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:50ffffff cdw10:50505050 cdw11:50505050 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5050505050505050 00:05:41.689 [2024-10-17 13:13:49.718671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:41.689 [2024-10-17 13:13:49.718732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (50) qid:0 cid:5 nsid:50505050 cdw10:50505050 cdw11:50505050 00:05:41.689 [2024-10-17 13:13:49.718747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:41.948 #28 NEW cov: 12406 ft: 13748 corp: 5/421b lim: 320 exec/s: 0 rss: 73Mb L: 140/140 MS: 1 CopyPart- 00:05:41.948 [2024-10-17 13:13:49.778786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d9) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:41.948 [2024-10-17 13:13:49.778813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:41.948 [2024-10-17 13:13:49.778873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:3b3b3b3b cdw11:3b3b3b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x3b3b3b3b3b3b3b3b 00:05:41.948 [2024-10-17 13:13:49.778888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:41.948 #29 NEW cov: 12406 ft: 13913 corp: 6/608b lim: 320 exec/s: 0 rss: 73Mb L: 187/187 MS: 1 InsertRepeatedBytes- 00:05:41.948 [2024-10-17 13:13:49.838919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:50ffffff cdw10:50505050 cdw11:50505050 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5050505050505050 00:05:41.948 [2024-10-17 13:13:49.838951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:41.948 [2024-10-17 13:13:49.839011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (50) qid:0 cid:5 nsid:50505050 cdw10:50505050 cdw11:50505050 00:05:41.948 [2024-10-17 13:13:49.839025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:41.948 #30 NEW cov: 12406 ft: 13998 corp: 7/748b lim: 320 exec/s: 0 rss: 73Mb L: 140/187 MS: 1 ChangeBinInt- 00:05:41.948 [2024-10-17 13:13:49.899111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d9) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:41.948 [2024-10-17 13:13:49.899137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:41.948 [2024-10-17 13:13:49.899200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:05:41.948 [2024-10-17 13:13:49.899215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:41.948 #31 NEW cov: 12406 
ft: 14094 corp: 8/936b lim: 320 exec/s: 0 rss: 74Mb L: 188/188 MS: 1 CrossOver- 00:05:41.948 [2024-10-17 13:13:49.939179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:50ffffff cdw10:50505050 cdw11:50505050 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5050505050505050 00:05:41.948 [2024-10-17 13:13:49.939206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:41.948 [2024-10-17 13:13:49.939266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (50) qid:0 cid:5 nsid:50505050 cdw10:50505050 cdw11:50505050 00:05:41.948 [2024-10-17 13:13:49.939280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:41.948 #32 NEW cov: 12406 ft: 14166 corp: 9/1123b lim: 320 exec/s: 0 rss: 74Mb L: 187/188 MS: 1 CopyPart- 00:05:42.208 [2024-10-17 13:13:49.999355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:50ffffff cdw10:50505050 cdw11:50505050 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5050505050505050 00:05:42.208 [2024-10-17 13:13:49.999384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:42.208 [2024-10-17 13:13:49.999444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (50) qid:0 cid:5 nsid:50505050 cdw10:50505050 cdw11:50505050 00:05:42.208 [2024-10-17 13:13:49.999458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:42.208 #33 NEW cov: 12406 ft: 14223 corp: 10/1263b lim: 320 exec/s: 0 rss: 74Mb L: 140/188 MS: 1 ChangeByte- 00:05:42.208 [2024-10-17 13:13:50.039538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d9) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:42.208 [2024-10-17 13:13:50.039568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:42.208 [2024-10-17 13:13:50.039633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:3b3b3b3b cdw11:3b3b3b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x3b3b3b3b3b3b3b3b 00:05:42.208 [2024-10-17 13:13:50.039648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:42.208 #34 NEW cov: 12406 ft: 14254 corp: 11/1451b lim: 320 exec/s: 0 rss: 74Mb L: 188/188 MS: 1 InsertByte- 00:05:42.208 [2024-10-17 13:13:50.099715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d9) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:42.208 [2024-10-17 13:13:50.099745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:42.208 [2024-10-17 13:13:50.099810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3b) qid:0 cid:5 nsid:3b3b3b3b cdw10:3b3b3b3b cdw11:3b3b3b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x3b3b3b3b3b3b3b3b 00:05:42.208 [2024-10-17 13:13:50.099825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:42.208 #35 NEW cov: 12406 ft: 14277 corp: 12/1617b lim: 320 exec/s: 0 rss: 74Mb L: 166/188 MS: 1 EraseBytes- 
00:05:42.208 [2024-10-17 13:13:50.159852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:50ffffff cdw10:50505050 cdw11:50505050 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5050505050505050 00:05:42.208 [2024-10-17 13:13:50.159879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:42.208 [2024-10-17 13:13:50.159937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (50) qid:0 cid:5 nsid:50505050 cdw10:50505050 cdw11:50505050 00:05:42.208 [2024-10-17 13:13:50.159952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:42.208 NEW_FUNC[1/1]: 0x1bff788 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:05:42.208 #36 NEW cov: 12429 ft: 14309 corp: 13/1780b lim: 320 exec/s: 0 rss: 74Mb L: 163/188 MS: 1 CopyPart- 00:05:42.208 [2024-10-17 13:13:50.220257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d9) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:42.208 [2024-10-17 13:13:50.220284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:42.208 [2024-10-17 13:13:50.220348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3b) qid:0 cid:5 nsid:3b3b3b3b cdw10:3b3b3b3b cdw11:3b3b3b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x3b3b3b3b3b3b3b3b 00:05:42.208 [2024-10-17 13:13:50.220363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:42.208 [2024-10-17 13:13:50.220425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3b) qid:0 cid:6 nsid:3b3b3b3b cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffff27ffffff 00:05:42.208 [2024-10-17 13:13:50.220439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:42.208 [2024-10-17 13:13:50.220491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:05:42.208 [2024-10-17 13:13:50.220505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:42.468 #37 NEW cov: 12430 ft: 14632 corp: 14/2059b lim: 320 exec/s: 0 rss: 74Mb L: 279/279 MS: 1 InsertRepeatedBytes- 00:05:42.468 [2024-10-17 13:13:50.280112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d9) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:42.468 [2024-10-17 13:13:50.280139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:42.468 #38 NEW cov: 12430 ft: 14662 corp: 15/2153b lim: 320 exec/s: 38 rss: 74Mb L: 94/279 MS: 1 CMP- DE: "\001;"- 00:05:42.468 [2024-10-17 13:13:50.320255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:50ffffff cdw10:50505050 cdw11:50505050 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5050505050505050 00:05:42.468 [2024-10-17 13:13:50.320282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:42.468 [2024-10-17 13:13:50.320342] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (50) qid:0 cid:5 nsid:50505050 cdw10:50505050 cdw11:50505050 00:05:42.468 [2024-10-17 13:13:50.320356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:42.468 #39 NEW cov: 12430 ft: 14712 corp: 16/2293b lim: 320 exec/s: 39 rss: 74Mb L: 140/279 MS: 1 ChangeBinInt- 00:05:42.468 [2024-10-17 13:13:50.360488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d9) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:42.468 [2024-10-17 13:13:50.360515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:42.468 #40 NEW cov: 12430 ft: 14781 corp: 17/2387b lim: 320 exec/s: 40 rss: 74Mb L: 94/279 MS: 1 PersAutoDict- DE: "\001;"- 00:05:42.468 [2024-10-17 13:13:50.400407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d9) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:42.468 [2024-10-17 13:13:50.400434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:42.468 #41 NEW cov: 12430 ft: 14835 corp: 18/2481b lim: 320 exec/s: 41 rss: 74Mb L: 94/279 MS: 1 ChangeByte- 00:05:42.468 [2024-10-17 13:13:50.440834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d9) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:42.468 [2024-10-17 13:13:50.440860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:42.468 [2024-10-17 13:13:50.440923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3b) qid:0 cid:5 nsid:3b3b3b3b cdw10:3b3b3b3b cdw11:3b3b3b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x3b3b3b3b3b3b3b3b 00:05:42.468 [2024-10-17 13:13:50.440938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:42.468 [2024-10-17 13:13:50.440997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3b) qid:0 cid:6 nsid:3b3b3b3b cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffff27ffffff 00:05:42.468 [2024-10-17 13:13:50.441011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:42.468 [2024-10-17 13:13:50.441064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:05:42.468 [2024-10-17 13:13:50.441078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:42.468 #42 NEW cov: 12430 ft: 14920 corp: 19/2760b lim: 320 exec/s: 42 rss: 74Mb L: 279/279 MS: 1 ShuffleBytes- 00:05:42.468 [2024-10-17 13:13:50.500781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:50ffffff cdw10:50505050 cdw11:50505050 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5050505050505050 00:05:42.468 [2024-10-17 13:13:50.500806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:42.468 [2024-10-17 13:13:50.500866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (50) qid:0 cid:5 
nsid:50505050 cdw10:50505050 cdw11:50505050 00:05:42.468 [2024-10-17 13:13:50.500881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:42.727 #43 NEW cov: 12430 ft: 14961 corp: 20/2900b lim: 320 exec/s: 43 rss: 74Mb L: 140/279 MS: 1 ChangeBit- 00:05:42.727 [2024-10-17 13:13:50.540869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:50ffffff cdw10:50505050 cdw11:50505050 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5050505050505050 00:05:42.727 [2024-10-17 13:13:50.540896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:42.727 [2024-10-17 13:13:50.540955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (50) qid:0 cid:5 nsid:50505050 cdw10:50505050 cdw11:50505050 00:05:42.727 [2024-10-17 13:13:50.540969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:42.727 #44 NEW cov: 12430 ft: 14975 corp: 21/3040b lim: 320 exec/s: 44 rss: 74Mb L: 140/279 MS: 1 ChangeBit- 00:05:42.727 [2024-10-17 13:13:50.581010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d9) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:42.727 [2024-10-17 13:13:50.581036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:42.727 [2024-10-17 13:13:50.581099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:3b3b3b3b cdw11:3b3b3b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x3b3b3b3b3b3b3b3b 00:05:42.727 [2024-10-17 13:13:50.581114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:42.727 #45 NEW cov: 12430 ft: 14982 corp: 22/3228b lim: 320 exec/s: 45 rss: 74Mb L: 188/279 MS: 1 InsertByte- 00:05:42.727 [2024-10-17 13:13:50.621088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d9) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:42.727 [2024-10-17 13:13:50.621114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:42.727 #46 NEW cov: 12430 ft: 15045 corp: 23/3322b lim: 320 exec/s: 46 rss: 74Mb L: 94/279 MS: 1 ChangeBinInt- 00:05:42.727 [2024-10-17 13:13:50.681444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d9) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:42.727 [2024-10-17 13:13:50.681471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:42.727 [2024-10-17 13:13:50.681531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:05:42.727 [2024-10-17 13:13:50.681546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:42.727 [2024-10-17 13:13:50.681607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:05:42.727 [2024-10-17 13:13:50.681622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:42.727 #47 NEW cov: 12430 ft: 15180 corp: 24/3528b lim: 320 exec/s: 47 rss: 74Mb L: 206/279 MS: 1 InsertRepeatedBytes- 00:05:42.727 [2024-10-17 13:13:50.741482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:50ffffff cdw10:50505050 cdw11:50505050 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5050505050505050 00:05:42.728 [2024-10-17 13:13:50.741508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:42.728 [2024-10-17 13:13:50.741566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (50) qid:0 cid:5 nsid:50505050 cdw10:50505050 cdw11:50505050 00:05:42.728 [2024-10-17 13:13:50.741580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:42.987 #48 NEW cov: 12430 ft: 15183 corp: 25/3669b lim: 320 exec/s: 48 rss: 75Mb L: 141/279 MS: 1 InsertByte- 00:05:42.987 [2024-10-17 13:13:50.801551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d9) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:42.987 [2024-10-17 13:13:50.801577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:42.987 #49 NEW cov: 12430 ft: 15192 corp: 26/3763b lim: 320 exec/s: 49 rss: 75Mb L: 94/279 MS: 1 ChangeBinInt- 00:05:42.987 [2024-10-17 13:13:50.861834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:50ffffff cdw10:50505050 cdw11:50505050 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5050505050505050 00:05:42.987 [2024-10-17 13:13:50.861860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:42.987 [2024-10-17 13:13:50.861924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d9) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:42.987 [2024-10-17 13:13:50.861939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:42.987 #50 NEW cov: 12430 ft: 15201 corp: 27/3950b lim: 320 exec/s: 50 rss: 75Mb L: 187/279 MS: 1 CrossOver- 00:05:42.987 [2024-10-17 13:13:50.901916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:50ffffff cdw10:a9a9a9a9 cdw11:a9a9a9a9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5050505050505050 00:05:42.987 [2024-10-17 13:13:50.901942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:42.987 [2024-10-17 13:13:50.902007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a9) qid:0 cid:5 nsid:a9a9a9a9 cdw10:50505050 cdw11:50505050 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:42.987 [2024-10-17 13:13:50.902022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:42.987 #51 NEW cov: 12430 ft: 15229 corp: 28/4087b lim: 320 exec/s: 51 rss: 75Mb L: 137/279 MS: 1 InsertRepeatedBytes- 00:05:42.987 [2024-10-17 13:13:50.941892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d9) qid:0 cid:4 
nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:42.987 [2024-10-17 13:13:50.941918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:42.987 #52 NEW cov: 12430 ft: 15236 corp: 29/4181b lim: 320 exec/s: 52 rss: 75Mb L: 94/279 MS: 1 ChangeBinInt- 00:05:42.987 [2024-10-17 13:13:51.002080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d9) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:42.987 [2024-10-17 13:13:51.002106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:42.987 #53 NEW cov: 12430 ft: 15246 corp: 30/4275b lim: 320 exec/s: 53 rss: 75Mb L: 94/279 MS: 1 ShuffleBytes- 00:05:43.247 [2024-10-17 13:13:51.042276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:50ffffff cdw10:50505050 cdw11:50505050 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5050505050505050 00:05:43.247 [2024-10-17 13:13:51.042302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:43.247 [2024-10-17 13:13:51.042365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d9) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:43.247 [2024-10-17 13:13:51.042379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:43.247 #59 NEW cov: 12430 ft: 15264 corp: 31/4462b lim: 320 exec/s: 59 rss: 75Mb L: 187/279 MS: 1 ChangeBinInt- 00:05:43.247 [2024-10-17 13:13:51.102463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:50ffffff cdw10:50505050 cdw11:50505050 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5050505050505050 00:05:43.247 [2024-10-17 13:13:51.102490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:43.247 [2024-10-17 13:13:51.102551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (50) qid:0 cid:5 nsid:50505050 cdw10:50505050 cdw11:50505050 00:05:43.247 [2024-10-17 13:13:51.102566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:43.247 #60 NEW cov: 12430 ft: 15271 corp: 32/4645b lim: 320 exec/s: 60 rss: 75Mb L: 183/279 MS: 1 CrossOver- 00:05:43.247 [2024-10-17 13:13:51.162562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d9) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffff23ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:43.247 [2024-10-17 13:13:51.162593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:43.247 #61 NEW cov: 12430 ft: 15278 corp: 33/4739b lim: 320 exec/s: 61 rss: 75Mb L: 94/279 MS: 1 ChangeByte- 00:05:43.247 [2024-10-17 13:13:51.223045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d9) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:43.247 [2024-10-17 13:13:51.223074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:43.247 [2024-10-17 13:13:51.223135] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:3b3b3b3b cdw11:3b3b3b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x3b3b3b3b3b3b3b3b 00:05:43.247 [2024-10-17 13:13:51.223155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:43.247 [2024-10-17 13:13:51.223230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3b) qid:0 cid:6 nsid:3b3b3b3b cdw10:50505050 cdw11:50505050 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5050505050505050 00:05:43.247 [2024-10-17 13:13:51.223245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:43.247 [2024-10-17 13:13:51.223303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (50) qid:0 cid:7 nsid:50505050 cdw10:3b3b3b3b cdw11:3b3b3b3b 00:05:43.247 [2024-10-17 13:13:51.223316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:43.247 #62 NEW cov: 12430 ft: 15372 corp: 34/5008b lim: 320 exec/s: 62 rss: 75Mb L: 269/279 MS: 1 CrossOver- 00:05:43.247 [2024-10-17 13:13:51.282905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:50ffffff cdw10:50505050 cdw11:50505050 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5050505050505050 00:05:43.247 [2024-10-17 13:13:51.282933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:43.247 [2024-10-17 13:13:51.282991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (50) qid:0 cid:5 nsid:50505050 cdw10:50505050 cdw11:50505050 00:05:43.247 [2024-10-17 13:13:51.283005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:43.507 [2024-10-17 13:13:51.343093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:50ffffff cdw10:50505050 cdw11:50505050 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5050505050505050 00:05:43.507 [2024-10-17 13:13:51.343119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:43.507 [2024-10-17 13:13:51.343187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (50) qid:0 cid:5 nsid:50505050 cdw10:50505050 cdw11:50505050 00:05:43.507 [2024-10-17 13:13:51.343201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:43.507 #64 pulse cov: 12430 ft: 15393 corp: 34/5008b lim: 320 exec/s: 32 rss: 75Mb 00:05:43.507 #64 NEW cov: 12430 ft: 15393 corp: 35/5171b lim: 320 exec/s: 32 rss: 75Mb L: 163/279 MS: 2 ShuffleBytes-ChangeBit- 00:05:43.507 #64 DONE cov: 12430 ft: 15393 corp: 35/5171b lim: 320 exec/s: 32 rss: 75Mb 00:05:43.507 ###### Recommended dictionary. ###### 00:05:43.507 "\001;" # Uses: 2 00:05:43.507 ###### End of recommended dictionary. 
###### 00:05:43.507 Done 64 runs in 2 second(s) 00:05:43.507 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:05:43.507 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:05:43.507 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:05:43.507 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:05:43.507 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:05:43.507 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:05:43.507 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:05:43.507 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:05:43.507 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:05:43.507 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:05:43.507 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:05:43.507 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:05:43.507 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:05:43.507 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:05:43.507 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:05:43.507 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:05:43.507 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:05:43.507 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:05:43.507 13:13:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:05:43.507 [2024-10-17 13:13:51.516359] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:05:43.507 [2024-10-17 13:13:51.516428] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3840279 ] 00:05:43.766 [2024-10-17 13:13:51.698821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.767 [2024-10-17 13:13:51.732000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.767 [2024-10-17 13:13:51.791084] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:43.767 [2024-10-17 13:13:51.807491] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:05:44.026 INFO: Running with entropic power schedule (0xFF, 100). 
00:05:44.026 INFO: Seed: 1635226007 00:05:44.026 INFO: Loaded 1 modules (384632 inline 8-bit counters): 384632 [0x2bf274c, 0x2c505c4), 00:05:44.026 INFO: Loaded 1 PC tables (384632 PCs): 384632 [0x2c505c8,0x322ed48), 00:05:44.026 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:05:44.026 INFO: A corpus is not provided, starting from an empty corpus 00:05:44.026 #2 INITED exec/s: 0 rss: 65Mb 00:05:44.026 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:05:44.026 This may also happen if the target rejected all inputs we tried so far 00:05:44.026 [2024-10-17 13:13:51.856713] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:44.026 [2024-10-17 13:13:51.856840] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:44.026 [2024-10-17 13:13:51.856958] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:44.026 [2024-10-17 13:13:51.857067] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:44.026 [2024-10-17 13:13:51.857308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.026 [2024-10-17 13:13:51.857339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:44.026 [2024-10-17 13:13:51.857396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.026 [2024-10-17 13:13:51.857414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:44.027 [2024-10-17 13:13:51.857471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.027 [2024-10-17 13:13:51.857485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:44.027 [2024-10-17 13:13:51.857542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.027 [2024-10-17 13:13:51.857556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:44.287 NEW_FUNC[1/715]: 0x43c4c8 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:05:44.287 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:05:44.287 #4 NEW cov: 12235 ft: 12230 corp: 2/26b lim: 30 exec/s: 0 rss: 73Mb L: 25/25 MS: 2 InsertByte-InsertRepeatedBytes- 00:05:44.287 [2024-10-17 13:13:52.187378] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:44.287 [2024-10-17 13:13:52.187499] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:44.287 [2024-10-17 13:13:52.187602] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:44.287 [2024-10-17 13:13:52.187814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.287 [2024-10-17 13:13:52.187847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:44.287 [2024-10-17 13:13:52.187904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.287 [2024-10-17 13:13:52.187919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:44.287 [2024-10-17 13:13:52.187973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.287 [2024-10-17 13:13:52.187988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:44.287 #5 NEW cov: 12348 ft: 13424 corp: 3/45b lim: 30 exec/s: 0 rss: 73Mb L: 19/25 MS: 1 CrossOver- 00:05:44.287 [2024-10-17 13:13:52.227496] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:44.287 [2024-10-17 13:13:52.227608] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:44.287 [2024-10-17 13:13:52.227713] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:44.287 [2024-10-17 13:13:52.227817] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:44.287 [2024-10-17 13:13:52.227919] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:44.287 [2024-10-17 13:13:52.228125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.287 [2024-10-17 13:13:52.228154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:44.287 [2024-10-17 13:13:52.228222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.287 [2024-10-17 13:13:52.228238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:44.287 [2024-10-17 13:13:52.228289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.287 [2024-10-17 13:13:52.228307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:44.287 [2024-10-17 13:13:52.228357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.287 [2024-10-17 13:13:52.228372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:44.287 [2024-10-17 13:13:52.228424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.287 [2024-10-17 13:13:52.228438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:05:44.287 #6 NEW cov: 12354 ft: 13678 corp: 4/75b lim: 30 exec/s: 0 rss: 73Mb L: 30/30 MS: 1 CrossOver- 00:05:44.287 [2024-10-17 13:13:52.287517] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261576) > buf size (4096) 00:05:44.287 [2024-10-17 13:13:52.287726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff7100fb cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.287 [2024-10-17 13:13:52.287753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:44.287 #7 NEW cov: 12462 ft: 14353 corp: 5/84b lim: 30 exec/s: 0 rss: 73Mb L: 9/30 MS: 1 CMP- DE: "\377q\3730\370\264\252\326"- 00:05:44.287 [2024-10-17 13:13:52.327719] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:44.287 [2024-10-17 13:13:52.327834] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:44.287 [2024-10-17 13:13:52.327942] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:44.287 [2024-10-17 13:13:52.328043] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:44.287 [2024-10-17 13:13:52.328248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.287 [2024-10-17 13:13:52.328274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:44.287 [2024-10-17 13:13:52.328323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.287 [2024-10-17 13:13:52.328338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:44.287 [2024-10-17 13:13:52.328387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.287 [2024-10-17 13:13:52.328401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:44.287 [2024-10-17 13:13:52.328449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.287 [2024-10-17 13:13:52.328462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:44.547 #8 NEW cov: 12462 ft: 14421 corp: 6/110b lim: 30 exec/s: 0 rss: 73Mb L: 26/30 MS: 1 CrossOver- 00:05:44.547 [2024-10-17 13:13:52.387785] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (797696) > buf size (4096) 00:05:44.547 [2024-10-17 13:13:52.387994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff8371 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.547 [2024-10-17 13:13:52.388019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:44.547 #9 NEW cov: 12462 ft: 14534 corp: 7/119b lim: 30 exec/s: 0 rss: 73Mb L: 9/30 MS: 1 PersAutoDict- DE: "\377q\3730\370\264\252\326"- 00:05:44.547 [2024-10-17 
13:13:52.427919] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261576) > buf size (4096) 00:05:44.547 [2024-10-17 13:13:52.428120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff7100fb cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.547 [2024-10-17 13:13:52.428145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:44.547 #12 NEW cov: 12462 ft: 14593 corp: 8/130b lim: 30 exec/s: 0 rss: 73Mb L: 11/30 MS: 3 InsertByte-CrossOver-PersAutoDict- DE: "\377q\3730\370\264\252\326"- 00:05:44.547 [2024-10-17 13:13:52.468096] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:44.547 [2024-10-17 13:13:52.468215] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:44.547 [2024-10-17 13:13:52.468320] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:44.547 [2024-10-17 13:13:52.468526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.547 [2024-10-17 13:13:52.468551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:44.547 [2024-10-17 13:13:52.468605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.547 [2024-10-17 13:13:52.468619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:44.547 [2024-10-17 13:13:52.468668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.547 [2024-10-17 13:13:52.468683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:44.547 #13 NEW cov: 12462 ft: 14740 corp: 9/149b lim: 30 exec/s: 0 rss: 73Mb L: 19/30 MS: 1 EraseBytes- 00:05:44.547 [2024-10-17 13:13:52.528249] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:44.547 [2024-10-17 13:13:52.528365] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:44.547 [2024-10-17 13:13:52.528562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.547 [2024-10-17 13:13:52.528588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:44.547 [2024-10-17 13:13:52.528638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.547 [2024-10-17 13:13:52.528652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:44.547 #14 NEW cov: 12462 ft: 15016 corp: 10/163b lim: 30 exec/s: 0 rss: 73Mb L: 14/30 MS: 1 EraseBytes- 00:05:44.547 [2024-10-17 13:13:52.568301] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d9d9 00:05:44.547 [2024-10-17 13:13:52.568502] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0ad981d9 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.547 [2024-10-17 13:13:52.568527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:44.547 #15 NEW cov: 12462 ft: 15072 corp: 11/174b lim: 30 exec/s: 0 rss: 73Mb L: 11/30 MS: 1 InsertRepeatedBytes- 00:05:44.806 [2024-10-17 13:13:52.608442] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d9fb 00:05:44.806 [2024-10-17 13:13:52.608560] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (574436) > buf size (4096) 00:05:44.806 [2024-10-17 13:13:52.608768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff71810a cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.806 [2024-10-17 13:13:52.608797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:44.806 [2024-10-17 13:13:52.608848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:30f802b4 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.806 [2024-10-17 13:13:52.608863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:44.806 #16 NEW cov: 12462 ft: 15181 corp: 12/190b lim: 30 exec/s: 0 rss: 74Mb L: 16/30 MS: 1 CrossOver- 00:05:44.806 [2024-10-17 13:13:52.668582] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261576) > buf size (4096) 00:05:44.806 [2024-10-17 13:13:52.668783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff7100fb cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.806 [2024-10-17 13:13:52.668809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:44.806 #17 NEW cov: 12462 ft: 15204 corp: 13/199b lim: 30 exec/s: 0 rss: 74Mb L: 9/30 MS: 1 ShuffleBytes- 00:05:44.806 [2024-10-17 13:13:52.728783] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261576) > buf size (4096) 00:05:44.806 [2024-10-17 13:13:52.728990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff7100fb cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.806 [2024-10-17 13:13:52.729016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:44.806 NEW_FUNC[1/1]: 0x1bff788 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:05:44.806 #18 NEW cov: 12485 ft: 15233 corp: 14/208b lim: 30 exec/s: 0 rss: 74Mb L: 9/30 MS: 1 ChangeByte- 00:05:44.806 [2024-10-17 13:13:52.788946] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:44.806 [2024-10-17 13:13:52.789057] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:44.806 [2024-10-17 13:13:52.789172] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:44.807 [2024-10-17 13:13:52.789377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.807 [2024-10-17 13:13:52.789402] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:44.807 [2024-10-17 13:13:52.789454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.807 [2024-10-17 13:13:52.789468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:44.807 [2024-10-17 13:13:52.789517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.807 [2024-10-17 13:13:52.789531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:44.807 #19 NEW cov: 12485 ft: 15250 corp: 15/227b lim: 30 exec/s: 0 rss: 74Mb L: 19/30 MS: 1 ShuffleBytes- 00:05:44.807 [2024-10-17 13:13:52.849113] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (797696) > buf size (4096) 00:05:44.807 [2024-10-17 13:13:52.849332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff8371 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:44.807 [2024-10-17 13:13:52.849357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:45.066 #20 NEW cov: 12485 ft: 15257 corp: 16/236b lim: 30 exec/s: 20 rss: 74Mb L: 9/30 MS: 1 ChangeBit- 00:05:45.066 [2024-10-17 13:13:52.909328] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.066 [2024-10-17 13:13:52.909447] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.066 [2024-10-17 13:13:52.909663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.066 [2024-10-17 13:13:52.909689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:45.066 [2024-10-17 13:13:52.909740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.066 [2024-10-17 13:13:52.909754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:45.066 #21 NEW cov: 12485 ft: 15267 corp: 17/249b lim: 30 exec/s: 21 rss: 74Mb L: 13/30 MS: 1 EraseBytes- 00:05:45.066 [2024-10-17 13:13:52.949431] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.066 [2024-10-17 13:13:52.949547] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.066 [2024-10-17 13:13:52.949655] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.066 [2024-10-17 13:13:52.949866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.066 [2024-10-17 13:13:52.949891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:45.066 [2024-10-17 13:13:52.949944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 
cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.066 [2024-10-17 13:13:52.949958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:45.066 [2024-10-17 13:13:52.950010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.066 [2024-10-17 13:13:52.950025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:45.066 #22 NEW cov: 12485 ft: 15277 corp: 18/270b lim: 30 exec/s: 22 rss: 74Mb L: 21/30 MS: 1 CopyPart- 00:05:45.066 [2024-10-17 13:13:52.989498] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (116720) > buf size (4096) 00:05:45.066 [2024-10-17 13:13:52.989699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:71fb0030 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.066 [2024-10-17 13:13:52.989724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:45.066 #23 NEW cov: 12485 ft: 15283 corp: 19/279b lim: 30 exec/s: 23 rss: 74Mb L: 9/30 MS: 1 CopyPart- 00:05:45.066 [2024-10-17 13:13:53.049820] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x4a4a 00:05:45.066 [2024-10-17 13:13:53.049935] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.066 [2024-10-17 13:13:53.050045] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.066 [2024-10-17 13:13:53.050157] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.066 [2024-10-17 13:13:53.050261] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.066 [2024-10-17 13:13:53.050463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a4a004a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.066 [2024-10-17 13:13:53.050488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:45.066 [2024-10-17 13:13:53.050541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.066 [2024-10-17 13:13:53.050555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:45.066 [2024-10-17 13:13:53.050611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.066 [2024-10-17 13:13:53.050625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:45.066 [2024-10-17 13:13:53.050674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.066 [2024-10-17 13:13:53.050688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:45.066 [2024-10-17 13:13:53.050736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG 
PAGE (02) qid:0 cid:8 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.066 [2024-10-17 13:13:53.050750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:05:45.066 #24 NEW cov: 12485 ft: 15290 corp: 20/309b lim: 30 exec/s: 24 rss: 74Mb L: 30/30 MS: 1 ChangeBinInt- 00:05:45.066 [2024-10-17 13:13:53.109905] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200002d4a 00:05:45.066 [2024-10-17 13:13:53.110018] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.066 [2024-10-17 13:13:53.110219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.066 [2024-10-17 13:13:53.110245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:45.066 [2024-10-17 13:13:53.110295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.066 [2024-10-17 13:13:53.110309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:45.326 #25 NEW cov: 12485 ft: 15301 corp: 21/323b lim: 30 exec/s: 25 rss: 74Mb L: 14/30 MS: 1 InsertByte- 00:05:45.326 [2024-10-17 13:13:53.170114] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.326 [2024-10-17 13:13:53.170253] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.326 [2024-10-17 13:13:53.170362] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.326 [2024-10-17 13:13:53.170467] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.326 [2024-10-17 13:13:53.170670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.326 [2024-10-17 13:13:53.170696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:45.326 [2024-10-17 13:13:53.170748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.326 [2024-10-17 13:13:53.170762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:45.326 [2024-10-17 13:13:53.170811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.326 [2024-10-17 13:13:53.170825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:45.326 [2024-10-17 13:13:53.170874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.326 [2024-10-17 13:13:53.170888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:45.326 #26 NEW cov: 12485 ft: 15312 corp: 22/348b lim: 30 exec/s: 26 rss: 74Mb L: 
25/30 MS: 1 CrossOver- 00:05:45.326 [2024-10-17 13:13:53.210142] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d9fb 00:05:45.326 [2024-10-17 13:13:53.210262] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xaad6 00:05:45.326 [2024-10-17 13:13:53.210473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff71810a cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.326 [2024-10-17 13:13:53.210497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:45.326 [2024-10-17 13:13:53.210549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:302200f8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.326 [2024-10-17 13:13:53.210563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:45.326 #27 NEW cov: 12485 ft: 15329 corp: 23/365b lim: 30 exec/s: 27 rss: 75Mb L: 17/30 MS: 1 InsertByte- 00:05:45.326 [2024-10-17 13:13:53.270351] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.326 [2024-10-17 13:13:53.270463] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.326 [2024-10-17 13:13:53.270572] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.326 [2024-10-17 13:13:53.270773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a4a020f cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.326 [2024-10-17 13:13:53.270798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:45.326 [2024-10-17 13:13:53.270847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.326 [2024-10-17 13:13:53.270862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:45.326 [2024-10-17 13:13:53.270911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.326 [2024-10-17 13:13:53.270925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:45.326 #28 NEW cov: 12485 ft: 15339 corp: 24/387b lim: 30 exec/s: 28 rss: 75Mb L: 22/30 MS: 1 InsertByte- 00:05:45.327 [2024-10-17 13:13:53.330483] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d9d9 00:05:45.327 [2024-10-17 13:13:53.330688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0ad981d9 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.327 [2024-10-17 13:13:53.330712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:45.327 #29 NEW cov: 12485 ft: 15359 corp: 25/398b lim: 30 exec/s: 29 rss: 75Mb L: 11/30 MS: 1 ChangeBinInt- 00:05:45.586 [2024-10-17 13:13:53.390685] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.586 [2024-10-17 13:13:53.390802] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid 
log page offset 0x200004a4a 00:05:45.586 [2024-10-17 13:13:53.390910] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.586 [2024-10-17 13:13:53.391113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.586 [2024-10-17 13:13:53.391139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:45.586 [2024-10-17 13:13:53.391191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.586 [2024-10-17 13:13:53.391207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:45.586 [2024-10-17 13:13:53.391261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.586 [2024-10-17 13:13:53.391275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:45.586 #30 NEW cov: 12485 ft: 15367 corp: 26/417b lim: 30 exec/s: 30 rss: 75Mb L: 19/30 MS: 1 ShuffleBytes- 00:05:45.586 [2024-10-17 13:13:53.450820] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261576) > buf size (4096) 00:05:45.586 [2024-10-17 13:13:53.451027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff7100fb cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.586 [2024-10-17 13:13:53.451052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:45.586 #31 NEW cov: 12485 ft: 15378 corp: 27/428b lim: 30 exec/s: 31 rss: 75Mb L: 11/30 MS: 1 ChangeBit- 00:05:45.586 [2024-10-17 13:13:53.490941] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.586 [2024-10-17 13:13:53.491057] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.586 [2024-10-17 13:13:53.491174] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200009c4a 00:05:45.586 [2024-10-17 13:13:53.491383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a4a020f cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.586 [2024-10-17 13:13:53.491408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:45.586 [2024-10-17 13:13:53.491459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.587 [2024-10-17 13:13:53.491473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:45.587 [2024-10-17 13:13:53.491524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.587 [2024-10-17 13:13:53.491538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:45.587 #32 NEW cov: 12485 ft: 15394 corp: 28/451b lim: 30 
exec/s: 32 rss: 75Mb L: 23/30 MS: 1 InsertByte- 00:05:45.587 [2024-10-17 13:13:53.551122] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d9d9 00:05:45.587 [2024-10-17 13:13:53.551242] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d9d9 00:05:45.587 [2024-10-17 13:13:53.551347] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d9d9 00:05:45.587 [2024-10-17 13:13:53.551546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0ad981d9 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.587 [2024-10-17 13:13:53.551571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:45.587 [2024-10-17 13:13:53.551624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d9d981d9 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.587 [2024-10-17 13:13:53.551639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:45.587 [2024-10-17 13:13:53.551690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:d9d981d9 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.587 [2024-10-17 13:13:53.551704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:45.587 #33 NEW cov: 12485 ft: 15418 corp: 29/470b lim: 30 exec/s: 33 rss: 75Mb L: 19/30 MS: 1 CopyPart- 00:05:45.587 [2024-10-17 13:13:53.591172] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (797696) > buf size (4096) 00:05:45.587 [2024-10-17 13:13:53.591383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff8371 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.587 [2024-10-17 13:13:53.591408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:45.587 #34 NEW cov: 12485 ft: 15450 corp: 30/479b lim: 30 exec/s: 34 rss: 75Mb L: 9/30 MS: 1 ChangeByte- 00:05:45.846 [2024-10-17 13:13:53.651360] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (523720) > buf size (4096) 00:05:45.846 [2024-10-17 13:13:53.651571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff7181ff cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.846 [2024-10-17 13:13:53.651596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:45.846 #35 NEW cov: 12485 ft: 15462 corp: 31/490b lim: 30 exec/s: 35 rss: 75Mb L: 11/30 MS: 1 CopyPart- 00:05:45.846 [2024-10-17 13:13:53.711665] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000524a 00:05:45.846 [2024-10-17 13:13:53.711781] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.846 [2024-10-17 13:13:53.711890] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.846 [2024-10-17 13:13:53.711992] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.846 [2024-10-17 13:13:53.712098] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.847 [2024-10-17 13:13:53.712324] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.847 [2024-10-17 13:13:53.712349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:45.847 [2024-10-17 13:13:53.712401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.847 [2024-10-17 13:13:53.712415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:45.847 [2024-10-17 13:13:53.712467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.847 [2024-10-17 13:13:53.712481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:45.847 [2024-10-17 13:13:53.712534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.847 [2024-10-17 13:13:53.712547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:45.847 [2024-10-17 13:13:53.712598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.847 [2024-10-17 13:13:53.712612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:05:45.847 #36 NEW cov: 12485 ft: 15465 corp: 32/520b lim: 30 exec/s: 36 rss: 75Mb L: 30/30 MS: 1 ChangeBinInt- 00:05:45.847 [2024-10-17 13:13:53.751652] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200002d0e 00:05:45.847 [2024-10-17 13:13:53.751766] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.847 [2024-10-17 13:13:53.751969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.847 [2024-10-17 13:13:53.751996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:45.847 [2024-10-17 13:13:53.752048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.847 [2024-10-17 13:13:53.752067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:45.847 #37 NEW cov: 12485 ft: 15521 corp: 33/534b lim: 30 exec/s: 37 rss: 75Mb L: 14/30 MS: 1 ChangeBinInt- 00:05:45.847 [2024-10-17 13:13:53.811856] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200002d0e 00:05:45.847 [2024-10-17 13:13:53.811972] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:05:45.847 [2024-10-17 13:13:53.812180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.847 [2024-10-17 13:13:53.812207] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:45.847 [2024-10-17 13:13:53.812258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:45.847 [2024-10-17 13:13:53.812273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:45.847 #38 NEW cov: 12485 ft: 15524 corp: 34/548b lim: 30 exec/s: 19 rss: 75Mb L: 14/30 MS: 1 ChangeBinInt- 00:05:45.847 #38 DONE cov: 12485 ft: 15524 corp: 34/548b lim: 30 exec/s: 19 rss: 75Mb 00:05:45.847 ###### Recommended dictionary. ###### 00:05:45.847 "\377q\3730\370\264\252\326" # Uses: 2 00:05:45.847 ###### End of recommended dictionary. ###### 00:05:45.847 Done 38 runs in 2 second(s) 00:05:46.106 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:05:46.106 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:05:46.106 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:05:46.106 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:05:46.106 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:05:46.106 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:05:46.106 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:05:46.106 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:05:46.106 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:05:46.106 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:05:46.106 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:05:46.106 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:05:46.106 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:05:46.106 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:05:46.106 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:05:46.106 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:05:46.106 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:05:46.107 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:05:46.107 13:13:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:05:46.107 [2024-10-17 13:13:53.996940] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
00:05:46.107 [2024-10-17 13:13:53.997005] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3840625 ] 00:05:46.366 [2024-10-17 13:13:54.177882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.366 [2024-10-17 13:13:54.212242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.366 [2024-10-17 13:13:54.271594] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:46.366 [2024-10-17 13:13:54.287978] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:05:46.366 INFO: Running with entropic power schedule (0xFF, 100). 00:05:46.366 INFO: Seed: 4117235597 00:05:46.366 INFO: Loaded 1 modules (384632 inline 8-bit counters): 384632 [0x2bf274c, 0x2c505c4), 00:05:46.366 INFO: Loaded 1 PC tables (384632 PCs): 384632 [0x2c505c8,0x322ed48), 00:05:46.366 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:05:46.366 INFO: A corpus is not provided, starting from an empty corpus 00:05:46.366 #2 INITED exec/s: 0 rss: 66Mb 00:05:46.367 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:05:46.367 This may also happen if the target rejected all inputs we tried so far 00:05:46.367 [2024-10-17 13:13:54.354532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:46.367 [2024-10-17 13:13:54.354572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:46.626 NEW_FUNC[1/714]: 0x43ef78 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:05:46.626 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:05:46.626 #3 NEW cov: 12186 ft: 12190 corp: 2/10b lim: 35 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:05:46.885 [2024-10-17 13:13:54.705437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ff24 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:46.885 [2024-10-17 13:13:54.705484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:46.885 #4 NEW cov: 12304 ft: 12762 corp: 3/19b lim: 35 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 ChangeByte- 00:05:46.885 [2024-10-17 13:13:54.776375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:46.885 [2024-10-17 13:13:54.776407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:46.885 [2024-10-17 13:13:54.776528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:46.885 [2024-10-17 13:13:54.776547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:46.885 [2024-10-17 13:13:54.776671] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:46.885 [2024-10-17 13:13:54.776689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:46.885 [2024-10-17 13:13:54.776820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:46.885 [2024-10-17 13:13:54.776839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:46.885 [2024-10-17 13:13:54.776961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ff2400ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:46.885 [2024-10-17 13:13:54.776980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:05:46.885 #5 NEW cov: 12310 ft: 13620 corp: 4/54b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:05:46.885 [2024-10-17 13:13:54.845589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:46.885 [2024-10-17 13:13:54.845617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:46.885 #6 NEW cov: 12395 ft: 13884 corp: 5/63b lim: 35 exec/s: 0 rss: 73Mb L: 9/35 MS: 1 ChangeBinInt- 00:05:46.885 [2024-10-17 13:13:54.895590] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:46.885 [2024-10-17 13:13:54.895939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:6000002c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:46.885 [2024-10-17 13:13:54.895968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:46.885 [2024-10-17 13:13:54.896090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:46.885 [2024-10-17 13:13:54.896112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:46.885 #11 NEW cov: 12406 ft: 14209 corp: 6/78b lim: 35 exec/s: 0 rss: 73Mb L: 15/35 MS: 5 ChangeByte-ChangeBit-ChangeBit-InsertByte-InsertRepeatedBytes- 00:05:47.145 [2024-10-17 13:13:54.946797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ff0a000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.145 [2024-10-17 13:13:54.946827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:47.145 [2024-10-17 13:13:54.946960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.145 [2024-10-17 13:13:54.946979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:47.145 [2024-10-17 13:13:54.947102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 
cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.145 [2024-10-17 13:13:54.947121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:47.145 [2024-10-17 13:13:54.947253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.145 [2024-10-17 13:13:54.947271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:47.145 [2024-10-17 13:13:54.947396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ff2400ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.145 [2024-10-17 13:13:54.947414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:05:47.145 #12 NEW cov: 12406 ft: 14304 corp: 7/113b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 CrossOver- 00:05:47.145 [2024-10-17 13:13:55.016156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.145 [2024-10-17 13:13:55.016185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:47.145 #13 NEW cov: 12406 ft: 14398 corp: 8/122b lim: 35 exec/s: 0 rss: 74Mb L: 9/35 MS: 1 ShuffleBytes- 00:05:47.145 [2024-10-17 13:13:55.086960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:6000002c cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.145 [2024-10-17 13:13:55.086988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:47.145 [2024-10-17 13:13:55.087121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:0400ff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.145 [2024-10-17 13:13:55.087140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:47.145 [2024-10-17 13:13:55.087268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.145 [2024-10-17 13:13:55.087286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:47.145 #14 NEW cov: 12406 ft: 14780 corp: 9/146b lim: 35 exec/s: 0 rss: 74Mb L: 24/35 MS: 1 CrossOver- 00:05:47.145 [2024-10-17 13:13:55.156734] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:47.145 [2024-10-17 13:13:55.157113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:6000002c cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.145 [2024-10-17 13:13:55.157143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:47.145 [2024-10-17 13:13:55.157270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:0400ff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.145 [2024-10-17 13:13:55.157290] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:47.145 [2024-10-17 13:13:55.157413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.145 [2024-10-17 13:13:55.157436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:47.405 #15 NEW cov: 12406 ft: 14877 corp: 10/168b lim: 35 exec/s: 0 rss: 74Mb L: 22/35 MS: 1 EraseBytes- 00:05:47.405 [2024-10-17 13:13:55.226815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ff7f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.405 [2024-10-17 13:13:55.226846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:47.405 NEW_FUNC[1/1]: 0x1bff788 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:05:47.405 #16 NEW cov: 12429 ft: 15001 corp: 11/177b lim: 35 exec/s: 0 rss: 74Mb L: 9/35 MS: 1 ChangeBit- 00:05:47.405 [2024-10-17 13:13:55.277014] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:47.405 [2024-10-17 13:13:55.277175] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:47.405 [2024-10-17 13:13:55.277496] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:47.405 [2024-10-17 13:13:55.277829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:6000002c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.405 [2024-10-17 13:13:55.277858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:47.405 [2024-10-17 13:13:55.277977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.405 [2024-10-17 13:13:55.278003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:47.405 [2024-10-17 13:13:55.278126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ff00000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.405 [2024-10-17 13:13:55.278147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:47.405 [2024-10-17 13:13:55.278278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.405 [2024-10-17 13:13:55.278300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:47.405 [2024-10-17 13:13:55.278428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.405 [2024-10-17 13:13:55.278452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:05:47.405 #22 NEW cov: 12429 ft: 15105 corp: 12/212b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 
00:05:47.405 [2024-10-17 13:13:55.347198] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:47.405 [2024-10-17 13:13:55.347505] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:47.405 [2024-10-17 13:13:55.347843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:6000002c cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.405 [2024-10-17 13:13:55.347872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:47.405 [2024-10-17 13:13:55.347999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.405 [2024-10-17 13:13:55.348018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:47.405 [2024-10-17 13:13:55.348153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:000400ff cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.405 [2024-10-17 13:13:55.348173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:47.405 [2024-10-17 13:13:55.348292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.405 [2024-10-17 13:13:55.348312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:47.405 #23 NEW cov: 12429 ft: 15178 corp: 13/240b lim: 35 exec/s: 23 rss: 74Mb L: 28/35 MS: 1 InsertRepeatedBytes- 00:05:47.405 [2024-10-17 13:13:55.397877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ff0a000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.405 [2024-10-17 13:13:55.397906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:47.405 [2024-10-17 13:13:55.398027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.405 [2024-10-17 13:13:55.398048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:47.405 [2024-10-17 13:13:55.398168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.405 [2024-10-17 13:13:55.398186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:47.405 [2024-10-17 13:13:55.398317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ff24 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.405 [2024-10-17 13:13:55.398335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:47.405 #24 NEW cov: 12429 ft: 15212 corp: 14/270b lim: 35 exec/s: 24 rss: 74Mb L: 30/35 MS: 1 EraseBytes- 00:05:47.664 [2024-10-17 13:13:55.467516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0aff002f cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.664 [2024-10-17 13:13:55.467550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:47.664 #25 NEW cov: 12429 ft: 15252 corp: 15/280b lim: 35 exec/s: 25 rss: 74Mb L: 10/35 MS: 1 InsertByte- 00:05:47.664 [2024-10-17 13:13:55.537978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff000aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.664 [2024-10-17 13:13:55.538005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:47.664 [2024-10-17 13:13:55.538129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ff0400ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.664 [2024-10-17 13:13:55.538149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:47.664 #26 NEW cov: 12429 ft: 15260 corp: 16/297b lim: 35 exec/s: 26 rss: 74Mb L: 17/35 MS: 1 CopyPart- 00:05:47.664 [2024-10-17 13:13:55.578427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.664 [2024-10-17 13:13:55.578453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:47.664 [2024-10-17 13:13:55.578572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.664 [2024-10-17 13:13:55.578589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:47.664 [2024-10-17 13:13:55.578715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.664 [2024-10-17 13:13:55.578731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:47.664 [2024-10-17 13:13:55.578855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.664 [2024-10-17 13:13:55.578873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:47.664 #27 NEW cov: 12429 ft: 15298 corp: 17/330b lim: 35 exec/s: 27 rss: 74Mb L: 33/35 MS: 1 EraseBytes- 00:05:47.664 [2024-10-17 13:13:55.627999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00dfff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.664 [2024-10-17 13:13:55.628028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:47.664 #28 NEW cov: 12429 ft: 15311 corp: 18/339b lim: 35 exec/s: 28 rss: 74Mb L: 9/35 MS: 1 ChangeBit- 00:05:47.664 [2024-10-17 13:13:55.679050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.664 [2024-10-17 13:13:55.679078] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:47.664 [2024-10-17 13:13:55.679205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.664 [2024-10-17 13:13:55.679220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:47.664 [2024-10-17 13:13:55.679344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.664 [2024-10-17 13:13:55.679361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:47.664 [2024-10-17 13:13:55.679489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.664 [2024-10-17 13:13:55.679512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:47.664 [2024-10-17 13:13:55.679635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ff2400ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.664 [2024-10-17 13:13:55.679656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:05:47.664 #29 NEW cov: 12429 ft: 15317 corp: 19/374b lim: 35 exec/s: 29 rss: 74Mb L: 35/35 MS: 1 ShuffleBytes- 00:05:47.923 [2024-10-17 13:13:55.728526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:6000002c cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.923 [2024-10-17 13:13:55.728557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:47.923 [2024-10-17 13:13:55.728680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.923 [2024-10-17 13:13:55.728697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:47.923 #30 NEW cov: 12429 ft: 15341 corp: 20/389b lim: 35 exec/s: 30 rss: 74Mb L: 15/35 MS: 1 EraseBytes- 00:05:47.923 [2024-10-17 13:13:55.779380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.923 [2024-10-17 13:13:55.779409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:47.923 [2024-10-17 13:13:55.779540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.923 [2024-10-17 13:13:55.779560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:47.923 [2024-10-17 13:13:55.779684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.923 [2024-10-17 13:13:55.779701] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:47.923 [2024-10-17 13:13:55.779829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.923 [2024-10-17 13:13:55.779847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:47.923 [2024-10-17 13:13:55.779970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ff2400ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.923 [2024-10-17 13:13:55.779989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:05:47.923 #31 NEW cov: 12429 ft: 15378 corp: 21/424b lim: 35 exec/s: 31 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:05:47.923 [2024-10-17 13:13:55.848636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.923 [2024-10-17 13:13:55.848664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:47.923 #32 NEW cov: 12429 ft: 15388 corp: 22/433b lim: 35 exec/s: 32 rss: 74Mb L: 9/35 MS: 1 ChangeBinInt- 00:05:47.923 [2024-10-17 13:13:55.898756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ff24000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.923 [2024-10-17 13:13:55.898785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:47.923 #33 NEW cov: 12429 ft: 15414 corp: 23/442b lim: 35 exec/s: 33 rss: 74Mb L: 9/35 MS: 1 ShuffleBytes- 00:05:47.923 [2024-10-17 13:13:55.948941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:0a00ff04 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:47.923 [2024-10-17 13:13:55.948969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:47.923 #34 NEW cov: 12429 ft: 15416 corp: 24/451b lim: 35 exec/s: 34 rss: 74Mb L: 9/35 MS: 1 ShuffleBytes- 00:05:48.182 [2024-10-17 13:13:55.999999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:48.182 [2024-10-17 13:13:56.000029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:48.182 [2024-10-17 13:13:56.000149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:48.182 [2024-10-17 13:13:56.000171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:48.182 [2024-10-17 13:13:56.000301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:48.182 [2024-10-17 13:13:56.000320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:48.182 [2024-10-17 13:13:56.000446] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:48.182 [2024-10-17 13:13:56.000465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:48.182 [2024-10-17 13:13:56.000582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff0024ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:48.182 [2024-10-17 13:13:56.000601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:05:48.182 #35 NEW cov: 12429 ft: 15422 corp: 25/486b lim: 35 exec/s: 35 rss: 74Mb L: 35/35 MS: 1 ShuffleBytes- 00:05:48.182 [2024-10-17 13:13:56.049350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:48.182 [2024-10-17 13:13:56.049378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:48.182 #36 NEW cov: 12429 ft: 15439 corp: 26/495b lim: 35 exec/s: 36 rss: 74Mb L: 9/35 MS: 1 ChangeBinInt- 00:05:48.182 [2024-10-17 13:13:56.099390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:0a00ff04 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:48.182 [2024-10-17 13:13:56.099420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:48.182 #37 NEW cov: 12429 ft: 15601 corp: 27/506b lim: 35 exec/s: 37 rss: 75Mb L: 11/35 MS: 1 CMP- DE: "\367\000"- 00:05:48.182 [2024-10-17 13:13:56.169820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:6000002c cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:48.182 [2024-10-17 13:13:56.169848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:48.182 [2024-10-17 13:13:56.169972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:48.182 [2024-10-17 13:13:56.169990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:48.182 #38 NEW cov: 12429 ft: 15635 corp: 28/521b lim: 35 exec/s: 38 rss: 75Mb L: 15/35 MS: 1 CopyPart- 00:05:48.442 [2024-10-17 13:13:56.239780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:c6c600c6 cdw11:c600c6c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:48.442 [2024-10-17 13:13:56.239813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:48.442 #40 NEW cov: 12429 ft: 15662 corp: 29/533b lim: 35 exec/s: 40 rss: 75Mb L: 12/35 MS: 2 EraseBytes-InsertRepeatedBytes- 00:05:48.442 [2024-10-17 13:13:56.310472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:6000002c cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:48.442 [2024-10-17 13:13:56.310499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:48.442 [2024-10-17 13:13:56.310608] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:010000ff cdw11:040000fa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:48.442 [2024-10-17 13:13:56.310627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:48.442 [2024-10-17 13:13:56.310745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:48.442 [2024-10-17 13:13:56.310764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:48.442 #41 NEW cov: 12429 ft: 15672 corp: 30/557b lim: 35 exec/s: 20 rss: 75Mb L: 24/35 MS: 1 ChangeBinInt- 00:05:48.442 #41 DONE cov: 12429 ft: 15672 corp: 30/557b lim: 35 exec/s: 20 rss: 75Mb 00:05:48.442 ###### Recommended dictionary. ###### 00:05:48.442 "\367\000" # Uses: 0 00:05:48.442 ###### End of recommended dictionary. ###### 00:05:48.442 Done 41 runs in 2 second(s) 00:05:48.442 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:05:48.442 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:05:48.442 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:05:48.442 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:05:48.442 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:05:48.442 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:05:48.442 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:05:48.442 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:05:48.442 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:05:48.442 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:05:48.442 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:05:48.442 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:05:48.442 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:05:48.442 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:05:48.442 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:05:48.442 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:05:48.442 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:05:48.442 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:05:48.442 13:13:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:05:48.442 [2024-10-17 13:13:56.483525] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:05:48.442 [2024-10-17 13:13:56.483597] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3841154 ] 00:05:48.701 [2024-10-17 13:13:56.665241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.701 [2024-10-17 13:13:56.698021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.960 [2024-10-17 13:13:56.756751] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:48.960 [2024-10-17 13:13:56.773060] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:05:48.960 INFO: Running with entropic power schedule (0xFF, 100). 00:05:48.960 INFO: Seed: 2305244862 00:05:48.960 INFO: Loaded 1 modules (384632 inline 8-bit counters): 384632 [0x2bf274c, 0x2c505c4), 00:05:48.960 INFO: Loaded 1 PC tables (384632 PCs): 384632 [0x2c505c8,0x322ed48), 00:05:48.960 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:05:48.960 INFO: A corpus is not provided, starting from an empty corpus 00:05:48.960 #2 INITED exec/s: 0 rss: 65Mb 00:05:48.960 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:05:48.960 This may also happen if the target rejected all inputs we tried so far 00:05:49.220 NEW_FUNC[1/706]: 0x440c58 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:05:49.220 NEW_FUNC[2/706]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:05:49.220 #6 NEW cov: 12157 ft: 12143 corp: 2/5b lim: 20 exec/s: 0 rss: 73Mb L: 4/4 MS: 4 ShuffleBytes-ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:05:49.220 NEW_FUNC[1/1]: 0x17a13d8 in nvme_ctrlr_get_ready_timeout /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:1288 00:05:49.220 #7 NEW cov: 12275 ft: 12743 corp: 3/10b lim: 20 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CrossOver- 00:05:49.220 #8 NEW cov: 12281 ft: 12969 corp: 4/15b lim: 20 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CrossOver- 00:05:49.479 #9 NEW cov: 12366 ft: 13210 corp: 5/20b lim: 20 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 ChangeBit- 00:05:49.479 #10 NEW cov: 12366 ft: 13486 corp: 6/24b lim: 20 exec/s: 0 rss: 73Mb L: 4/5 MS: 1 ChangeBinInt- 00:05:49.479 #11 NEW cov: 12366 ft: 13556 corp: 7/28b lim: 20 exec/s: 0 rss: 73Mb L: 4/5 MS: 1 ChangeBit- 00:05:49.479 #19 NEW cov: 12372 ft: 13647 corp: 8/34b lim: 20 exec/s: 0 rss: 73Mb L: 6/6 MS: 3 EraseBytes-ChangeBit-CrossOver- 00:05:49.479 #20 NEW cov: 12372 ft: 13680 corp: 9/39b lim: 20 exec/s: 0 rss: 73Mb L: 5/6 MS: 1 InsertByte- 00:05:49.479 #21 NEW cov: 12389 ft: 14111 corp: 10/51b lim: 20 exec/s: 0 rss: 73Mb L: 12/12 MS: 1 CMP- DE: "0\012\247\3773\373r\000"- 00:05:49.737 #24 NEW cov: 12389 ft: 14159 corp: 11/58b lim: 20 exec/s: 0 rss: 74Mb L: 7/12 MS: 3 ChangeBit-CopyPart-InsertRepeatedBytes- 00:05:49.737 #25 NEW cov: 12405 ft: 14388 corp: 12/78b lim: 20 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 PersAutoDict- DE: "0\012\247\3773\373r\000"- 00:05:49.737 #26 NEW cov: 12405 ft: 14429 corp: 13/90b lim: 20 exec/s: 0 rss: 74Mb L: 12/20 
MS: 1 ChangeASCIIInt- 00:05:49.737 NEW_FUNC[1/1]: 0x1bff788 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:05:49.737 #27 NEW cov: 12428 ft: 14473 corp: 14/110b lim: 20 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 PersAutoDict- DE: "0\012\247\3773\373r\000"- 00:05:49.737 #28 NEW cov: 12428 ft: 14492 corp: 15/114b lim: 20 exec/s: 0 rss: 74Mb L: 4/20 MS: 1 CopyPart- 00:05:49.997 #29 NEW cov: 12428 ft: 14536 corp: 16/119b lim: 20 exec/s: 29 rss: 74Mb L: 5/20 MS: 1 ChangeBit- 00:05:49.997 #30 NEW cov: 12428 ft: 14582 corp: 17/123b lim: 20 exec/s: 30 rss: 74Mb L: 4/20 MS: 1 ShuffleBytes- 00:05:49.997 #31 NEW cov: 12428 ft: 14609 corp: 18/128b lim: 20 exec/s: 31 rss: 74Mb L: 5/20 MS: 1 ChangeBit- 00:05:49.997 [2024-10-17 13:13:57.934944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:05:49.997 [2024-10-17 13:13:57.934980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:49.997 NEW_FUNC[1/15]: 0x1829d08 in nvme_ctrlr_queue_async_event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:3287 00:05:49.997 NEW_FUNC[2/15]: 0x184eea8 in nvme_ctrlr_process_async_event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:3247 00:05:49.997 #32 NEW cov: 12648 ft: 15044 corp: 19/137b lim: 20 exec/s: 32 rss: 74Mb L: 9/20 MS: 1 CrossOver- 00:05:49.997 #33 NEW cov: 12648 ft: 15098 corp: 20/141b lim: 20 exec/s: 33 rss: 74Mb L: 4/20 MS: 1 ShuffleBytes- 00:05:50.255 #36 NEW cov: 12648 ft: 15120 corp: 21/145b lim: 20 exec/s: 36 rss: 74Mb L: 4/20 MS: 3 EraseBytes-ShuffleBytes-InsertByte- 00:05:50.255 #37 NEW cov: 12648 ft: 15159 corp: 22/150b lim: 20 exec/s: 37 rss: 74Mb L: 5/20 MS: 1 InsertByte- 00:05:50.255 #38 NEW cov: 12648 ft: 15167 corp: 23/156b lim: 20 exec/s: 38 rss: 75Mb L: 6/20 MS: 1 CopyPart- 00:05:50.255 #39 NEW cov: 12648 ft: 15171 corp: 24/162b lim: 20 exec/s: 39 rss: 75Mb L: 6/20 MS: 1 ShuffleBytes- 00:05:50.255 [2024-10-17 13:13:58.276148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:05:50.255 [2024-10-17 13:13:58.276180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:50.515 #40 NEW cov: 12649 ft: 15234 corp: 25/178b lim: 20 exec/s: 40 rss: 75Mb L: 16/20 MS: 1 InsertRepeatedBytes- 00:05:50.515 #41 NEW cov: 12649 ft: 15250 corp: 26/183b lim: 20 exec/s: 41 rss: 75Mb L: 5/20 MS: 1 CrossOver- 00:05:50.515 #42 NEW cov: 12649 ft: 15264 corp: 27/195b lim: 20 exec/s: 42 rss: 75Mb L: 12/20 MS: 1 ChangeBinInt- 00:05:50.515 [2024-10-17 13:13:58.436565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:05:50.515 [2024-10-17 13:13:58.436592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:50.515 NEW_FUNC[1/1]: 0x155b0b8 in _nvmf_tcp_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3649 00:05:50.515 #43 NEW cov: 12676 ft: 15385 corp: 28/209b lim: 20 exec/s: 43 rss: 75Mb L: 14/20 MS: 1 InsertRepeatedBytes- 00:05:50.515 #44 NEW cov: 12676 ft: 15395 corp: 29/215b lim: 20 exec/s: 44 rss: 75Mb L: 6/20 MS: 1 CopyPart- 00:05:50.515 [2024-10-17 13:13:58.516813] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:05:50.515 [2024-10-17 13:13:58.516840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:50.515 #45 NEW cov: 12676 ft: 15426 corp: 30/232b lim: 20 exec/s: 45 rss: 75Mb L: 17/20 MS: 1 InsertByte- 00:05:50.774 #46 NEW cov: 12676 ft: 15457 corp: 31/241b lim: 20 exec/s: 46 rss: 75Mb L: 9/20 MS: 1 CrossOver- 00:05:50.774 #47 NEW cov: 12676 ft: 15474 corp: 32/246b lim: 20 exec/s: 47 rss: 75Mb L: 5/20 MS: 1 CopyPart- 00:05:50.774 #48 NEW cov: 12676 ft: 15486 corp: 33/259b lim: 20 exec/s: 48 rss: 75Mb L: 13/20 MS: 1 PersAutoDict- DE: "0\012\247\3773\373r\000"- 00:05:50.774 #49 NEW cov: 12676 ft: 15514 corp: 34/272b lim: 20 exec/s: 49 rss: 75Mb L: 13/20 MS: 1 ChangeByte- 00:05:50.774 [2024-10-17 13:13:58.777503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:05:50.774 [2024-10-17 13:13:58.777532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:50.774 #50 NEW cov: 12676 ft: 15518 corp: 35/286b lim: 20 exec/s: 25 rss: 75Mb L: 14/20 MS: 1 CrossOver- 00:05:50.774 #50 DONE cov: 12676 ft: 15518 corp: 35/286b lim: 20 exec/s: 25 rss: 75Mb 00:05:50.774 ###### Recommended dictionary. ###### 00:05:50.774 "0\012\247\3773\373r\000" # Uses: 3 00:05:50.774 ###### End of recommended dictionary. ###### 00:05:50.774 Done 50 runs in 2 second(s) 00:05:51.033 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:05:51.033 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:05:51.033 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:05:51.033 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:05:51.033 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:05:51.034 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:05:51.034 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:05:51.034 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:05:51.034 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:05:51.034 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:05:51.034 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:05:51.034 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:05:51.034 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:05:51.034 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:05:51.034 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:05:51.034 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:05:51.034 13:13:58 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:05:51.034 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:05:51.034 13:13:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:05:51.034 [2024-10-17 13:13:58.971589] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:05:51.034 [2024-10-17 13:13:58.971660] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3841539 ] 00:05:51.293 [2024-10-17 13:13:59.155848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.293 [2024-10-17 13:13:59.194023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.293 [2024-10-17 13:13:59.253438] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:51.293 [2024-10-17 13:13:59.269805] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:05:51.293 INFO: Running with entropic power schedule (0xFF, 100). 00:05:51.293 INFO: Seed: 509300800 00:05:51.293 INFO: Loaded 1 modules (384632 inline 8-bit counters): 384632 [0x2bf274c, 0x2c505c4), 00:05:51.293 INFO: Loaded 1 PC tables (384632 PCs): 384632 [0x2c505c8,0x322ed48), 00:05:51.293 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:05:51.293 INFO: A corpus is not provided, starting from an empty corpus 00:05:51.293 #2 INITED exec/s: 0 rss: 65Mb 00:05:51.293 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:05:51.293 This may also happen if the target rejected all inputs we tried so far 00:05:51.552 [2024-10-17 13:13:59.346713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:06060a06 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:51.552 [2024-10-17 13:13:59.346752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:51.552 [2024-10-17 13:13:59.346869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:06060606 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:51.552 [2024-10-17 13:13:59.346887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:51.552 [2024-10-17 13:13:59.347011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:06060606 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:51.552 [2024-10-17 13:13:59.347033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:51.812 NEW_FUNC[1/715]: 0x441d58 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:05:51.812 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:05:51.812 #23 NEW cov: 12193 ft: 12193 corp: 2/24b lim: 35 exec/s: 0 rss: 73Mb L: 23/23 MS: 1 InsertRepeatedBytes- 00:05:51.812 [2024-10-17 13:13:59.677212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:20000a4c cdw11:dc3a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:51.812 [2024-10-17 13:13:59.677249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:51.812 #27 NEW cov: 12324 ft: 13558 corp: 3/33b lim: 35 exec/s: 0 rss: 73Mb L: 9/23 MS: 4 ShuffleBytes-CopyPart-EraseBytes-CMP- DE: "L \000\334:\177\000\000"- 00:05:51.812 [2024-10-17 13:13:59.727900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:06060a06 cdw11:06060002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:51.812 [2024-10-17 13:13:59.727929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:51.812 [2024-10-17 13:13:59.728049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:dc3a2000 cdw11:7f000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:51.812 [2024-10-17 13:13:59.728067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:51.812 [2024-10-17 13:13:59.728197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:06060606 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:51.812 [2024-10-17 13:13:59.728216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:51.812 #33 NEW cov: 12330 ft: 13760 corp: 4/56b lim: 35 exec/s: 0 rss: 73Mb L: 23/23 MS: 1 PersAutoDict- DE: "L \000\334:\177\000\000"- 00:05:51.812 [2024-10-17 13:13:59.797761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE 
IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:51.812 [2024-10-17 13:13:59.797790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:51.812 [2024-10-17 13:13:59.797925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:51.812 [2024-10-17 13:13:59.797945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:51.812 #36 NEW cov: 12415 ft: 14224 corp: 5/73b lim: 35 exec/s: 0 rss: 73Mb L: 17/23 MS: 3 ChangeByte-ChangeBit-InsertRepeatedBytes- 00:05:51.812 [2024-10-17 13:13:59.848339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:51.812 [2024-10-17 13:13:59.848368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:51.812 [2024-10-17 13:13:59.848506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:51.812 [2024-10-17 13:13:59.848526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:51.812 [2024-10-17 13:13:59.848650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:51.812 [2024-10-17 13:13:59.848669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:52.071 #37 NEW cov: 12415 ft: 14354 corp: 6/100b lim: 35 exec/s: 0 rss: 73Mb L: 27/27 MS: 1 InsertRepeatedBytes- 00:05:52.071 [2024-10-17 13:13:59.898191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.071 [2024-10-17 13:13:59.898218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.072 [2024-10-17 13:13:59.898345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffff02 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.072 [2024-10-17 13:13:59.898364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:52.072 #38 NEW cov: 12415 ft: 14458 corp: 7/117b lim: 35 exec/s: 0 rss: 73Mb L: 17/27 MS: 1 ChangeBinInt- 00:05:52.072 [2024-10-17 13:13:59.968763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:06060a06 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.072 [2024-10-17 13:13:59.968792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.072 [2024-10-17 13:13:59.968923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:06060606 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.072 [2024-10-17 13:13:59.968940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:52.072 [2024-10-17 13:13:59.969061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:06060606 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.072 [2024-10-17 13:13:59.969095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:52.072 #44 NEW cov: 12415 ft: 14511 corp: 8/140b lim: 35 exec/s: 0 rss: 73Mb L: 23/27 MS: 1 ChangeByte- 00:05:52.072 [2024-10-17 13:14:00.018981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:06060a06 cdw11:06060002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.072 [2024-10-17 13:14:00.019009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.072 [2024-10-17 13:14:00.019140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:dc3a2000 cdw11:7f000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.072 [2024-10-17 13:14:00.019162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:52.072 [2024-10-17 13:14:00.019293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:dc3a2000 cdw11:7f000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.072 [2024-10-17 13:14:00.019310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:52.072 #45 NEW cov: 12415 ft: 14572 corp: 9/163b lim: 35 exec/s: 0 rss: 73Mb L: 23/27 MS: 1 PersAutoDict- DE: "L \000\334:\177\000\000"- 00:05:52.072 [2024-10-17 13:14:00.089598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:06060a06 cdw11:06060002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.072 [2024-10-17 13:14:00.089628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.072 [2024-10-17 13:14:00.089754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:dc3a2000 cdw11:7f000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.072 [2024-10-17 13:14:00.089775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:52.072 [2024-10-17 13:14:00.089916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:06060606 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.072 [2024-10-17 13:14:00.089938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:52.072 #46 NEW cov: 12415 ft: 14719 corp: 10/187b lim: 35 exec/s: 0 rss: 73Mb L: 24/27 MS: 1 InsertByte- 00:05:52.332 [2024-10-17 13:14:00.139530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0a0a0a06 cdw11:0a060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.332 [2024-10-17 13:14:00.139559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.332 [2024-10-17 13:14:00.139693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:4c200606 cdw11:00dc0000 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:05:52.332 [2024-10-17 13:14:00.139712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:52.332 [2024-10-17 13:14:00.139842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00067f00 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.332 [2024-10-17 13:14:00.139861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:52.332 #47 NEW cov: 12415 ft: 14774 corp: 11/213b lim: 35 exec/s: 0 rss: 73Mb L: 26/27 MS: 1 InsertRepeatedBytes- 00:05:52.332 [2024-10-17 13:14:00.189701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:06060a06 cdw11:06060002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.332 [2024-10-17 13:14:00.189731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.332 [2024-10-17 13:14:00.189866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:dc3a2000 cdw11:7f000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.332 [2024-10-17 13:14:00.189885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:52.332 [2024-10-17 13:14:00.190018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:06060606 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.332 [2024-10-17 13:14:00.190037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:52.332 NEW_FUNC[1/1]: 0x1bff788 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:05:52.332 #48 NEW cov: 12438 ft: 14823 corp: 12/237b lim: 35 exec/s: 0 rss: 74Mb L: 24/27 MS: 1 ChangeBit- 00:05:52.332 [2024-10-17 13:14:00.259916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.332 [2024-10-17 13:14:00.259948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.332 [2024-10-17 13:14:00.260086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.332 [2024-10-17 13:14:00.260104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:52.332 [2024-10-17 13:14:00.260233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.332 [2024-10-17 13:14:00.260251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:52.332 #49 NEW cov: 12438 ft: 14904 corp: 13/264b lim: 35 exec/s: 0 rss: 74Mb L: 27/27 MS: 1 ChangeByte- 00:05:52.332 [2024-10-17 13:14:00.329711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00dc4c20 cdw11:3a7f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.332 [2024-10-17 13:14:00.329741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.332 [2024-10-17 13:14:00.329882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.332 [2024-10-17 13:14:00.329902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:52.332 #50 NEW cov: 12438 ft: 14924 corp: 14/281b lim: 35 exec/s: 50 rss: 74Mb L: 17/27 MS: 1 PersAutoDict- DE: "L \000\334:\177\000\000"- 00:05:52.332 [2024-10-17 13:14:00.380536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffff0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.332 [2024-10-17 13:14:00.380566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.332 [2024-10-17 13:14:00.380692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.332 [2024-10-17 13:14:00.380714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:52.332 [2024-10-17 13:14:00.380852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.332 [2024-10-17 13:14:00.380871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:52.332 [2024-10-17 13:14:00.381005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ff2affff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.332 [2024-10-17 13:14:00.381025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:52.592 #51 NEW cov: 12438 ft: 15322 corp: 15/309b lim: 35 exec/s: 51 rss: 74Mb L: 28/28 MS: 1 CopyPart- 00:05:52.592 [2024-10-17 13:14:00.450378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:06060a06 cdw11:06060002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.592 [2024-10-17 13:14:00.450410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.592 [2024-10-17 13:14:00.450550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:dc3a2000 cdw11:7f000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.592 [2024-10-17 13:14:00.450570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:52.592 [2024-10-17 13:14:00.450705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:06060606 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.592 [2024-10-17 13:14:00.450725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:52.592 #52 NEW cov: 12438 ft: 15429 corp: 16/333b lim: 35 exec/s: 52 rss: 74Mb L: 24/28 MS: 1 ChangeBit- 00:05:52.592 [2024-10-17 13:14:00.520631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:06060a06 cdw11:06060000 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.592 [2024-10-17 13:14:00.520662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.592 [2024-10-17 13:14:00.520804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:06060606 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.592 [2024-10-17 13:14:00.520824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:52.592 [2024-10-17 13:14:00.520956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:06060606 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.592 [2024-10-17 13:14:00.520974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:52.592 #53 NEW cov: 12438 ft: 15445 corp: 17/356b lim: 35 exec/s: 53 rss: 74Mb L: 23/28 MS: 1 ShuffleBytes- 00:05:52.592 [2024-10-17 13:14:00.570921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.592 [2024-10-17 13:14:00.570950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.592 [2024-10-17 13:14:00.571087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.592 [2024-10-17 13:14:00.571107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:52.592 [2024-10-17 13:14:00.571241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffff60ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.592 [2024-10-17 13:14:00.571261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:52.592 #54 NEW cov: 12438 ft: 15504 corp: 18/383b lim: 35 exec/s: 54 rss: 74Mb L: 27/28 MS: 1 ChangeByte- 00:05:52.592 [2024-10-17 13:14:00.621093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:06060a06 cdw11:06060002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.592 [2024-10-17 13:14:00.621121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.592 [2024-10-17 13:14:00.621258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:dc3a2000 cdw11:7f000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.592 [2024-10-17 13:14:00.621277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:52.592 [2024-10-17 13:14:00.621409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2000064c cdw11:dc3a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.592 [2024-10-17 13:14:00.621430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:52.592 #55 NEW cov: 12438 ft: 15515 corp: 19/406b lim: 35 exec/s: 55 rss: 74Mb L: 23/28 MS: 1 PersAutoDict- DE: "L \000\334:\177\000\000"- 00:05:52.852 
[2024-10-17 13:14:00.670777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:20000a4c cdw11:dc3a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.852 [2024-10-17 13:14:00.670806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.852 #56 NEW cov: 12438 ft: 15535 corp: 20/414b lim: 35 exec/s: 56 rss: 74Mb L: 8/28 MS: 1 EraseBytes- 00:05:52.852 [2024-10-17 13:14:00.741693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:06060a06 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.852 [2024-10-17 13:14:00.741724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.852 [2024-10-17 13:14:00.741861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:06060606 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.852 [2024-10-17 13:14:00.741883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:52.852 [2024-10-17 13:14:00.742022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:06ff0606 cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.852 [2024-10-17 13:14:00.742043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:52.852 #57 NEW cov: 12438 ft: 15567 corp: 21/437b lim: 35 exec/s: 57 rss: 74Mb L: 23/28 MS: 1 CMP- DE: "\377\377\377\007"- 00:05:52.852 [2024-10-17 13:14:00.812199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:06060a06 cdw11:06060002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.852 [2024-10-17 13:14:00.812227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.852 [2024-10-17 13:14:00.812362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:a9a9a9a9 cdw11:a9a90001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.852 [2024-10-17 13:14:00.812378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:52.852 [2024-10-17 13:14:00.812520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00dca920 cdw11:3a7f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.852 [2024-10-17 13:14:00.812556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:52.852 [2024-10-17 13:14:00.812706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:06060006 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.852 [2024-10-17 13:14:00.812724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:52.852 #58 NEW cov: 12438 ft: 15585 corp: 22/468b lim: 35 exec/s: 58 rss: 74Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:05:52.852 [2024-10-17 13:14:00.861620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00dc4c20 cdw11:3a7f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.852 [2024-10-17 
13:14:00.861647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:52.852 [2024-10-17 13:14:00.861778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff600003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:52.852 [2024-10-17 13:14:00.861797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:52.852 #59 NEW cov: 12438 ft: 15613 corp: 23/485b lim: 35 exec/s: 59 rss: 74Mb L: 17/31 MS: 1 ChangeByte- 00:05:53.111 [2024-10-17 13:14:00.932765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:06060a06 cdw11:06060002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:53.111 [2024-10-17 13:14:00.932792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:53.111 [2024-10-17 13:14:00.932924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:064c0000 cdw11:20000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:53.111 [2024-10-17 13:14:00.932941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:53.111 [2024-10-17 13:14:00.933075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00003a7f cdw11:4c200000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:53.111 [2024-10-17 13:14:00.933093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:53.111 [2024-10-17 13:14:00.933226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:7f00dc3a cdw11:00060002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:53.111 [2024-10-17 13:14:00.933245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:53.111 [2024-10-17 13:14:00.933379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:dc3a2000 cdw11:7f000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:53.111 [2024-10-17 13:14:00.933396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:05:53.111 #60 NEW cov: 12438 ft: 15710 corp: 24/520b lim: 35 exec/s: 60 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:05:53.111 [2024-10-17 13:14:01.002723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffff0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:53.111 [2024-10-17 13:14:01.002751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:53.111 [2024-10-17 13:14:01.002879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:53.111 [2024-10-17 13:14:01.002899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:53.111 [2024-10-17 13:14:01.003030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:53.111 [2024-10-17 
13:14:01.003049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:53.111 [2024-10-17 13:14:01.003178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:2aff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:53.111 [2024-10-17 13:14:01.003196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:53.111 #61 NEW cov: 12438 ft: 15732 corp: 25/549b lim: 35 exec/s: 61 rss: 75Mb L: 29/35 MS: 1 InsertByte- 00:05:53.111 [2024-10-17 13:14:01.072628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:06060a06 cdw11:06060002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:53.111 [2024-10-17 13:14:01.072657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:53.111 [2024-10-17 13:14:01.072785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:dc3a2000 cdw11:7f000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:53.111 [2024-10-17 13:14:01.072804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:53.111 [2024-10-17 13:14:01.072945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00060000 cdw11:4c200000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:53.111 [2024-10-17 13:14:01.072963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:53.112 #62 NEW cov: 12438 ft: 15740 corp: 26/575b lim: 35 exec/s: 62 rss: 75Mb L: 26/35 MS: 1 InsertRepeatedBytes- 00:05:53.112 [2024-10-17 13:14:01.123353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:06060a06 cdw11:06060002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:53.112 [2024-10-17 13:14:01.123380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:53.112 [2024-10-17 13:14:01.123508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:dc3a2000 cdw11:7f000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:53.112 [2024-10-17 13:14:01.123527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:53.112 [2024-10-17 13:14:01.123659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:06060606 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:53.112 [2024-10-17 13:14:01.123676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:53.112 [2024-10-17 13:14:01.123803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:20000648 cdw11:dc3a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:53.112 [2024-10-17 13:14:01.123822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:53.112 [2024-10-17 13:14:01.123952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:06060006 cdw11:06170000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:53.112 
[2024-10-17 13:14:01.123974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:05:53.370 #63 NEW cov: 12438 ft: 15747 corp: 27/610b lim: 35 exec/s: 63 rss: 75Mb L: 35/35 MS: 1 CopyPart- 00:05:53.370 [2024-10-17 13:14:01.193035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:53.370 [2024-10-17 13:14:01.193065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:53.370 [2024-10-17 13:14:01.193195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:53.370 [2024-10-17 13:14:01.193214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:53.370 [2024-10-17 13:14:01.193351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00006009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:53.370 [2024-10-17 13:14:01.193371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:53.371 #64 NEW cov: 12438 ft: 15759 corp: 28/637b lim: 35 exec/s: 64 rss: 75Mb L: 27/35 MS: 1 ChangeBinInt- 00:05:53.371 [2024-10-17 13:14:01.262873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:53.371 [2024-10-17 13:14:01.262901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:53.371 [2024-10-17 13:14:01.263032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:53.371 [2024-10-17 13:14:01.263051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:53.371 #65 NEW cov: 12438 ft: 15766 corp: 29/654b lim: 35 exec/s: 65 rss: 75Mb L: 17/35 MS: 1 EraseBytes- 00:05:53.371 [2024-10-17 13:14:01.313397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:06060a06 cdw11:06060002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:53.371 [2024-10-17 13:14:01.313425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:53.371 [2024-10-17 13:14:01.313563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f63a2000 cdw11:7f000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:53.371 [2024-10-17 13:14:01.313582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:53.371 [2024-10-17 13:14:01.313714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:dc3a2000 cdw11:7f000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:53.371 [2024-10-17 13:14:01.313732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:53.371 #66 NEW cov: 12438 ft: 15770 corp: 30/677b lim: 35 exec/s: 33 rss: 75Mb L: 23/35 MS: 1 
ChangeByte- 00:05:53.371 #66 DONE cov: 12438 ft: 15770 corp: 30/677b lim: 35 exec/s: 33 rss: 75Mb 00:05:53.371 ###### Recommended dictionary. ###### 00:05:53.371 "L \000\334:\177\000\000" # Uses: 4 00:05:53.371 "\377\377\377\007" # Uses: 0 00:05:53.371 ###### End of recommended dictionary. ###### 00:05:53.371 Done 66 runs in 2 second(s) 00:05:53.630 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:05:53.630 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:05:53.630 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:05:53.630 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:05:53.630 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:05:53.630 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:05:53.630 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:05:53.630 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:05:53.630 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:05:53.630 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:05:53.630 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:05:53.630 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:05:53.630 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:05:53.630 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:05:53.630 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:05:53.630 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:05:53.630 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:05:53.630 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:05:53.630 13:14:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:05:53.630 [2024-10-17 13:14:01.491981] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
00:05:53.630 [2024-10-17 13:14:01.492044] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3841977 ] 00:05:53.630 [2024-10-17 13:14:01.673593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.889 [2024-10-17 13:14:01.707568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.889 [2024-10-17 13:14:01.766354] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:53.889 [2024-10-17 13:14:01.782713] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:05:53.889 INFO: Running with entropic power schedule (0xFF, 100). 00:05:53.889 INFO: Seed: 3021288980 00:05:53.889 INFO: Loaded 1 modules (384632 inline 8-bit counters): 384632 [0x2bf274c, 0x2c505c4), 00:05:53.889 INFO: Loaded 1 PC tables (384632 PCs): 384632 [0x2c505c8,0x322ed48), 00:05:53.889 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:05:53.889 INFO: A corpus is not provided, starting from an empty corpus 00:05:53.889 #2 INITED exec/s: 0 rss: 65Mb 00:05:53.889 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:05:53.889 This may also happen if the target rejected all inputs we tried so far 00:05:53.889 [2024-10-17 13:14:01.849056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:53.889 [2024-10-17 13:14:01.849093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.147 NEW_FUNC[1/715]: 0x443ef8 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:05:54.147 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:05:54.147 #12 NEW cov: 12223 ft: 12221 corp: 2/17b lim: 45 exec/s: 0 rss: 73Mb L: 16/16 MS: 5 ChangeBit-CopyPart-ChangeByte-EraseBytes-InsertRepeatedBytes- 00:05:54.406 [2024-10-17 13:14:02.199953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.406 [2024-10-17 13:14:02.199996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.406 #13 NEW cov: 12336 ft: 12885 corp: 3/34b lim: 45 exec/s: 0 rss: 73Mb L: 17/17 MS: 1 InsertByte- 00:05:54.406 [2024-10-17 13:14:02.270856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e5e534e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.406 [2024-10-17 13:14:02.270888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.406 [2024-10-17 13:14:02.271011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.406 [2024-10-17 13:14:02.271031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:05:54.406 [2024-10-17 13:14:02.271144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.406 [2024-10-17 13:14:02.271169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:54.406 [2024-10-17 13:14:02.271285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.406 [2024-10-17 13:14:02.271303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:54.406 #16 NEW cov: 12342 ft: 13853 corp: 4/75b lim: 45 exec/s: 0 rss: 73Mb L: 41/41 MS: 3 CopyPart-InsertByte-InsertRepeatedBytes- 00:05:54.406 [2024-10-17 13:14:02.331105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e5e534e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.406 [2024-10-17 13:14:02.331133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.406 [2024-10-17 13:14:02.331263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e5e5e57a cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.406 [2024-10-17 13:14:02.331280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.406 [2024-10-17 13:14:02.331402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.406 [2024-10-17 13:14:02.331421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:54.406 [2024-10-17 13:14:02.331535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.406 [2024-10-17 13:14:02.331552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:54.406 #17 NEW cov: 12427 ft: 14090 corp: 5/117b lim: 45 exec/s: 0 rss: 73Mb L: 42/42 MS: 1 InsertByte- 00:05:54.406 [2024-10-17 13:14:02.390392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.406 [2024-10-17 13:14:02.390418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.406 #18 NEW cov: 12427 ft: 14328 corp: 6/128b lim: 45 exec/s: 0 rss: 73Mb L: 11/42 MS: 1 EraseBytes- 00:05:54.406 [2024-10-17 13:14:02.451342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e5e534e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.406 [2024-10-17 13:14:02.451368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.407 [2024-10-17 13:14:02.451483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.407 [2024-10-17 
13:14:02.451501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.407 [2024-10-17 13:14:02.451621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.407 [2024-10-17 13:14:02.451637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:54.407 [2024-10-17 13:14:02.451773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.407 [2024-10-17 13:14:02.451791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:54.666 #19 NEW cov: 12427 ft: 14446 corp: 7/169b lim: 45 exec/s: 0 rss: 73Mb L: 41/42 MS: 1 ShuffleBytes- 00:05:54.666 [2024-10-17 13:14:02.500748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.666 [2024-10-17 13:14:02.500776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.666 #20 NEW cov: 12427 ft: 14529 corp: 8/186b lim: 45 exec/s: 0 rss: 73Mb L: 17/42 MS: 1 ShuffleBytes- 00:05:54.666 [2024-10-17 13:14:02.551688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e5e534e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.666 [2024-10-17 13:14:02.551716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.666 [2024-10-17 13:14:02.551838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.666 [2024-10-17 13:14:02.551857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.666 [2024-10-17 13:14:02.551978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.666 [2024-10-17 13:14:02.551996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:54.666 [2024-10-17 13:14:02.552113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.666 [2024-10-17 13:14:02.552130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:54.666 #21 NEW cov: 12427 ft: 14597 corp: 9/227b lim: 45 exec/s: 0 rss: 73Mb L: 41/42 MS: 1 CopyPart- 00:05:54.666 [2024-10-17 13:14:02.621119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.666 [2024-10-17 13:14:02.621148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.666 #22 NEW cov: 12427 ft: 14639 corp: 10/244b lim: 45 exec/s: 0 rss: 74Mb L: 17/42 MS: 1 CopyPart- 00:05:54.666 
[2024-10-17 13:14:02.692219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e5e534e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.666 [2024-10-17 13:14:02.692249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.666 [2024-10-17 13:14:02.692375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.666 [2024-10-17 13:14:02.692396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.666 [2024-10-17 13:14:02.692518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.666 [2024-10-17 13:14:02.692535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:54.666 [2024-10-17 13:14:02.692651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:8787e587 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.666 [2024-10-17 13:14:02.692670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:54.924 NEW_FUNC[1/1]: 0x1bff788 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:05:54.925 #23 NEW cov: 12450 ft: 14728 corp: 11/288b lim: 45 exec/s: 0 rss: 74Mb L: 44/44 MS: 1 InsertRepeatedBytes- 00:05:54.925 [2024-10-17 13:14:02.762377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e5e534e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.925 [2024-10-17 13:14:02.762405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.925 [2024-10-17 13:14:02.762524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.925 [2024-10-17 13:14:02.762540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.925 [2024-10-17 13:14:02.762658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.925 [2024-10-17 13:14:02.762678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:54.925 [2024-10-17 13:14:02.762794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.925 [2024-10-17 13:14:02.762812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:54.925 #24 NEW cov: 12450 ft: 14754 corp: 12/329b lim: 45 exec/s: 0 rss: 74Mb L: 41/44 MS: 1 CrossOver- 00:05:54.925 [2024-10-17 13:14:02.812515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e5e534e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.925 [2024-10-17 13:14:02.812542] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.925 [2024-10-17 13:14:02.812653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.925 [2024-10-17 13:14:02.812669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.925 [2024-10-17 13:14:02.812784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.925 [2024-10-17 13:14:02.812800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:54.925 [2024-10-17 13:14:02.812915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.925 [2024-10-17 13:14:02.812933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:54.925 #25 NEW cov: 12450 ft: 14789 corp: 13/370b lim: 45 exec/s: 25 rss: 74Mb L: 41/44 MS: 1 ShuffleBytes- 00:05:54.925 [2024-10-17 13:14:02.861739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76780003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.925 [2024-10-17 13:14:02.861769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.925 #26 NEW cov: 12450 ft: 14829 corp: 14/387b lim: 45 exec/s: 26 rss: 74Mb L: 17/44 MS: 1 ChangeBinInt- 00:05:54.925 [2024-10-17 13:14:02.931969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.925 [2024-10-17 13:14:02.931996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.925 #27 NEW cov: 12450 ft: 14844 corp: 15/404b lim: 45 exec/s: 27 rss: 74Mb L: 17/44 MS: 1 ChangeBinInt- 00:05:55.184 [2024-10-17 13:14:02.982169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76780003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.184 [2024-10-17 13:14:02.982197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.184 #28 NEW cov: 12450 ft: 14883 corp: 16/421b lim: 45 exec/s: 28 rss: 74Mb L: 17/44 MS: 1 ShuffleBytes- 00:05:55.184 [2024-10-17 13:14:03.053203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e5e534e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.184 [2024-10-17 13:14:03.053230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.184 [2024-10-17 13:14:03.053346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.184 [2024-10-17 13:14:03.053362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:55.184 
[2024-10-17 13:14:03.053482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.184 [2024-10-17 13:14:03.053516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:55.184 [2024-10-17 13:14:03.053637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:8787e587 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.184 [2024-10-17 13:14:03.053655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:55.184 #29 NEW cov: 12450 ft: 14934 corp: 17/462b lim: 45 exec/s: 29 rss: 74Mb L: 41/44 MS: 1 CrossOver- 00:05:55.184 [2024-10-17 13:14:03.102514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.184 [2024-10-17 13:14:03.102541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.184 #30 NEW cov: 12450 ft: 15082 corp: 18/479b lim: 45 exec/s: 30 rss: 74Mb L: 17/44 MS: 1 ChangeBit- 00:05:55.184 [2024-10-17 13:14:03.152730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.184 [2024-10-17 13:14:03.152757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.184 #31 NEW cov: 12450 ft: 15090 corp: 19/493b lim: 45 exec/s: 31 rss: 74Mb L: 14/44 MS: 1 EraseBytes- 00:05:55.184 [2024-10-17 13:14:03.203768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e5e534e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.184 [2024-10-17 13:14:03.203794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.184 [2024-10-17 13:14:03.203914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.184 [2024-10-17 13:14:03.203938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:55.184 [2024-10-17 13:14:03.204057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.184 [2024-10-17 13:14:03.204074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:55.184 [2024-10-17 13:14:03.204201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.184 [2024-10-17 13:14:03.204218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:55.184 #32 NEW cov: 12450 ft: 15123 corp: 20/534b lim: 45 exec/s: 32 rss: 74Mb L: 41/44 MS: 1 ChangeBit- 00:05:55.443 [2024-10-17 13:14:03.253092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76767676 
cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.443 [2024-10-17 13:14:03.253118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.443 #33 NEW cov: 12450 ft: 15126 corp: 21/549b lim: 45 exec/s: 33 rss: 74Mb L: 15/44 MS: 1 EraseBytes- 00:05:55.443 [2024-10-17 13:14:03.324092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e5e534e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.443 [2024-10-17 13:14:03.324119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.443 [2024-10-17 13:14:03.324240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.443 [2024-10-17 13:14:03.324257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:55.443 [2024-10-17 13:14:03.324373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.443 [2024-10-17 13:14:03.324392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:55.443 [2024-10-17 13:14:03.324507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.443 [2024-10-17 13:14:03.324526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:55.443 #34 NEW cov: 12450 ft: 15135 corp: 22/589b lim: 45 exec/s: 34 rss: 74Mb L: 40/44 MS: 1 EraseBytes- 00:05:55.443 [2024-10-17 13:14:03.373541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.443 [2024-10-17 13:14:03.373569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.443 #35 NEW cov: 12450 ft: 15140 corp: 23/606b lim: 45 exec/s: 35 rss: 74Mb L: 17/44 MS: 1 ChangeByte- 00:05:55.443 [2024-10-17 13:14:03.424553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e5e534e5 cdw11:e5e50001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.443 [2024-10-17 13:14:03.424581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.443 [2024-10-17 13:14:03.424692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.443 [2024-10-17 13:14:03.424710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:55.443 [2024-10-17 13:14:03.424834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.443 [2024-10-17 13:14:03.424851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:55.443 [2024-10-17 13:14:03.424971] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:8787e587 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.444 [2024-10-17 13:14:03.424990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:55.444 #36 NEW cov: 12450 ft: 15150 corp: 24/650b lim: 45 exec/s: 36 rss: 74Mb L: 44/44 MS: 1 ChangeByte- 00:05:55.703 [2024-10-17 13:14:03.494764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e5e534e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.703 [2024-10-17 13:14:03.494794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.703 [2024-10-17 13:14:03.494909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.703 [2024-10-17 13:14:03.494926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:55.703 [2024-10-17 13:14:03.495044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e576e5e5 cdw11:3f9f0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.703 [2024-10-17 13:14:03.495061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:55.703 [2024-10-17 13:14:03.495160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.703 [2024-10-17 13:14:03.495181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:55.703 #37 NEW cov: 12450 ft: 15163 corp: 25/694b lim: 45 exec/s: 37 rss: 74Mb L: 44/44 MS: 1 CrossOver- 00:05:55.703 [2024-10-17 13:14:03.544970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e5e534e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.703 [2024-10-17 13:14:03.545000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.703 [2024-10-17 13:14:03.545113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.703 [2024-10-17 13:14:03.545129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:55.703 [2024-10-17 13:14:03.545251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.703 [2024-10-17 13:14:03.545271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:55.703 [2024-10-17 13:14:03.545392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.703 [2024-10-17 13:14:03.545411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:55.703 #38 NEW cov: 12450 ft: 15178 
corp: 26/735b lim: 45 exec/s: 38 rss: 74Mb L: 41/44 MS: 1 ChangeBit- 00:05:55.703 [2024-10-17 13:14:03.615189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76780003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.703 [2024-10-17 13:14:03.615220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.703 [2024-10-17 13:14:03.615352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.703 [2024-10-17 13:14:03.615371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:55.703 [2024-10-17 13:14:03.615485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:83838383 cdw11:83830004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.703 [2024-10-17 13:14:03.615506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:55.703 [2024-10-17 13:14:03.615626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:83838383 cdw11:83830004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.703 [2024-10-17 13:14:03.615643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:55.703 #39 NEW cov: 12450 ft: 15202 corp: 27/773b lim: 45 exec/s: 39 rss: 74Mb L: 38/44 MS: 1 InsertRepeatedBytes- 00:05:55.703 [2024-10-17 13:14:03.675289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e5e534e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.703 [2024-10-17 13:14:03.675318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.703 [2024-10-17 13:14:03.675431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e5e5e57a cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.703 [2024-10-17 13:14:03.675450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:55.703 [2024-10-17 13:14:03.675565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.703 [2024-10-17 13:14:03.675585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:55.703 [2024-10-17 13:14:03.675702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:7676e576 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.703 [2024-10-17 13:14:03.675718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:55.703 #40 NEW cov: 12450 ft: 15214 corp: 28/815b lim: 45 exec/s: 40 rss: 74Mb L: 42/44 MS: 1 CrossOver- 00:05:55.703 [2024-10-17 13:14:03.745577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e5e534e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.703 [2024-10-17 13:14:03.745608] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.703 [2024-10-17 13:14:03.745720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.703 [2024-10-17 13:14:03.745737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:55.703 [2024-10-17 13:14:03.745857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.703 [2024-10-17 13:14:03.745876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:55.703 [2024-10-17 13:14:03.745987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:8787e587 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.703 [2024-10-17 13:14:03.746005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:55.963 #41 NEW cov: 12450 ft: 15230 corp: 29/856b lim: 45 exec/s: 41 rss: 75Mb L: 41/44 MS: 1 ChangeByte- 00:05:55.963 [2024-10-17 13:14:03.815550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76340007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.963 [2024-10-17 13:14:03.815580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.963 [2024-10-17 13:14:03.815697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:e5e5e5e5 cdw11:e5760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.963 [2024-10-17 13:14:03.815716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:55.963 [2024-10-17 13:14:03.815833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e57a7676 cdw11:e5e50007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.963 [2024-10-17 13:14:03.815850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:55.963 #42 NEW cov: 12450 ft: 15485 corp: 30/883b lim: 45 exec/s: 21 rss: 75Mb L: 27/44 MS: 1 CrossOver- 00:05:55.963 #42 DONE cov: 12450 ft: 15485 corp: 30/883b lim: 45 exec/s: 21 rss: 75Mb 00:05:55.963 Done 42 runs in 2 second(s) 00:05:55.963 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:05:55.963 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:05:55.963 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:05:55.963 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:05:55.963 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:05:55.963 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:05:55.963 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:05:55.963 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:05:55.963 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local 
nvmf_cfg=/tmp/fuzz_json_6.conf 00:05:55.963 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:05:55.963 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:05:55.963 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:05:55.963 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:05:55.963 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:05:55.963 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:05:55.963 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:05:55.963 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:05:55.963 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:05:55.963 13:14:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:05:55.963 [2024-10-17 13:14:03.994506] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:05:55.963 [2024-10-17 13:14:03.994574] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3842515 ] 00:05:56.222 [2024-10-17 13:14:04.169659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.222 [2024-10-17 13:14:04.206550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.222 [2024-10-17 13:14:04.265261] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:56.481 [2024-10-17 13:14:04.281570] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:05:56.481 INFO: Running with entropic power schedule (0xFF, 100). 00:05:56.481 INFO: Seed: 1224334847 00:05:56.481 INFO: Loaded 1 modules (384632 inline 8-bit counters): 384632 [0x2bf274c, 0x2c505c4), 00:05:56.481 INFO: Loaded 1 PC tables (384632 PCs): 384632 [0x2c505c8,0x322ed48), 00:05:56.481 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:05:56.481 INFO: A corpus is not provided, starting from an empty corpus 00:05:56.481 #2 INITED exec/s: 0 rss: 65Mb 00:05:56.481 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:05:56.481 This may also happen if the target rejected all inputs we tried so far 00:05:56.481 [2024-10-17 13:14:04.330742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:05:56.481 [2024-10-17 13:14:04.330770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.740 NEW_FUNC[1/712]: 0x446708 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:05:56.740 NEW_FUNC[2/712]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:05:56.740 #3 NEW cov: 12121 ft: 12135 corp: 2/3b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 CrossOver- 00:05:56.740 [2024-10-17 13:14:04.661524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2d cdw11:00000000 00:05:56.740 [2024-10-17 13:14:04.661554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.740 NEW_FUNC[1/1]: 0x1f97e28 in thread_execute_poller /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:957 00:05:56.740 #4 NEW cov: 12253 ft: 12802 corp: 3/5b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 InsertByte- 00:05:56.740 [2024-10-17 13:14:04.701610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a0a cdw11:00000000 00:05:56.740 [2024-10-17 13:14:04.701636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.740 #5 NEW cov: 12259 ft: 12972 corp: 4/7b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 ChangeBit- 00:05:56.740 [2024-10-17 13:14:04.761752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f50a cdw11:00000000 00:05:56.740 [2024-10-17 13:14:04.761777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.740 #6 NEW cov: 12344 ft: 13178 corp: 5/9b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 ChangeBinInt- 00:05:56.998 [2024-10-17 13:14:04.801834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000700a cdw11:00000000 00:05:56.998 [2024-10-17 13:14:04.801859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.998 #7 NEW cov: 12344 ft: 13371 corp: 6/11b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 ChangeByte- 00:05:56.998 [2024-10-17 13:14:04.841979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000700a cdw11:00000000 00:05:56.998 [2024-10-17 13:14:04.842006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.998 #8 NEW cov: 12344 ft: 13478 corp: 7/13b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 CopyPart- 00:05:56.998 [2024-10-17 13:14:04.902139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a08 cdw11:00000000 00:05:56.998 [2024-10-17 13:14:04.902177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.998 #9 NEW 
cov: 12344 ft: 13529 corp: 8/15b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 ChangeBit- 00:05:56.998 [2024-10-17 13:14:04.962292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff0a cdw11:00000000 00:05:56.998 [2024-10-17 13:14:04.962318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.998 #10 NEW cov: 12344 ft: 13571 corp: 9/17b lim: 10 exec/s: 0 rss: 74Mb L: 2/2 MS: 1 ChangeByte- 00:05:56.998 [2024-10-17 13:14:05.022447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:05:56.998 [2024-10-17 13:14:05.022474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.998 #12 NEW cov: 12344 ft: 13622 corp: 10/19b lim: 10 exec/s: 0 rss: 74Mb L: 2/2 MS: 2 EraseBytes-CopyPart- 00:05:57.257 [2024-10-17 13:14:05.062577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:05:57.257 [2024-10-17 13:14:05.062602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.257 #13 NEW cov: 12344 ft: 13715 corp: 11/21b lim: 10 exec/s: 0 rss: 74Mb L: 2/2 MS: 1 ShuffleBytes- 00:05:57.257 [2024-10-17 13:14:05.122745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002d70 cdw11:00000000 00:05:57.257 [2024-10-17 13:14:05.122770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.257 #18 NEW cov: 12344 ft: 13731 corp: 12/24b lim: 10 exec/s: 0 rss: 74Mb L: 3/3 MS: 5 EraseBytes-ChangeByte-CopyPart-CopyPart-CrossOver- 00:05:57.257 [2024-10-17 13:14:05.162838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000700a cdw11:00000000 00:05:57.257 [2024-10-17 13:14:05.162863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.257 #19 NEW cov: 12344 ft: 13757 corp: 13/26b lim: 10 exec/s: 0 rss: 74Mb L: 2/3 MS: 1 EraseBytes- 00:05:57.257 [2024-10-17 13:14:05.223304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000700a cdw11:00000000 00:05:57.257 [2024-10-17 13:14:05.223330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.257 [2024-10-17 13:14:05.223385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:05:57.257 [2024-10-17 13:14:05.223398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:57.257 [2024-10-17 13:14:05.223451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:05:57.257 [2024-10-17 13:14:05.223465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:57.257 NEW_FUNC[1/1]: 0x1bff788 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:05:57.257 #20 NEW cov: 12367 ft: 14077 corp: 14/33b lim: 10 exec/s: 0 rss: 74Mb L: 
7/7 MS: 1 InsertRepeatedBytes- 00:05:57.257 [2024-10-17 13:14:05.263162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000670a cdw11:00000000 00:05:57.257 [2024-10-17 13:14:05.263188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.257 #21 NEW cov: 12367 ft: 14098 corp: 15/35b lim: 10 exec/s: 0 rss: 74Mb L: 2/7 MS: 1 InsertByte- 00:05:57.257 [2024-10-17 13:14:05.303213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a3d cdw11:00000000 00:05:57.257 [2024-10-17 13:14:05.303256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.516 #22 NEW cov: 12367 ft: 14122 corp: 16/37b lim: 10 exec/s: 22 rss: 74Mb L: 2/7 MS: 1 ChangeBit- 00:05:57.516 [2024-10-17 13:14:05.363430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002aff cdw11:00000000 00:05:57.516 [2024-10-17 13:14:05.363455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.516 #23 NEW cov: 12367 ft: 14141 corp: 17/39b lim: 10 exec/s: 23 rss: 74Mb L: 2/7 MS: 1 CrossOver- 00:05:57.516 [2024-10-17 13:14:05.423854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000700a cdw11:00000000 00:05:57.516 [2024-10-17 13:14:05.423880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.516 [2024-10-17 13:14:05.423934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:05:57.516 [2024-10-17 13:14:05.423949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:57.516 [2024-10-17 13:14:05.424000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:05:57.516 [2024-10-17 13:14:05.424014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:57.516 #24 NEW cov: 12367 ft: 14157 corp: 18/46b lim: 10 exec/s: 24 rss: 74Mb L: 7/7 MS: 1 CrossOver- 00:05:57.516 [2024-10-17 13:14:05.483731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:05:57.516 [2024-10-17 13:14:05.483756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.516 #25 NEW cov: 12367 ft: 14159 corp: 19/49b lim: 10 exec/s: 25 rss: 74Mb L: 3/7 MS: 1 CrossOver- 00:05:57.516 [2024-10-17 13:14:05.523997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a70 cdw11:00000000 00:05:57.516 [2024-10-17 13:14:05.524022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.516 [2024-10-17 13:14:05.524076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002d0a cdw11:00000000 00:05:57.516 [2024-10-17 13:14:05.524090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:05:57.516 #26 NEW cov: 12367 ft: 14339 corp: 20/53b lim: 10 exec/s: 26 rss: 74Mb L: 4/7 MS: 1 CrossOver- 00:05:57.516 [2024-10-17 13:14:05.564014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a0a cdw11:00000000 00:05:57.516 [2024-10-17 13:14:05.564045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.774 #27 NEW cov: 12367 ft: 14353 corp: 21/55b lim: 10 exec/s: 27 rss: 74Mb L: 2/7 MS: 1 CrossOver- 00:05:57.774 [2024-10-17 13:14:05.604237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000700a cdw11:00000000 00:05:57.774 [2024-10-17 13:14:05.604263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.774 [2024-10-17 13:14:05.604315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000700a cdw11:00000000 00:05:57.774 [2024-10-17 13:14:05.604329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:57.774 #28 NEW cov: 12367 ft: 14370 corp: 22/59b lim: 10 exec/s: 28 rss: 74Mb L: 4/7 MS: 1 CopyPart- 00:05:57.774 [2024-10-17 13:14:05.644443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:05:57.774 [2024-10-17 13:14:05.644468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.774 [2024-10-17 13:14:05.644519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:05:57.774 [2024-10-17 13:14:05.644536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:57.774 [2024-10-17 13:14:05.644586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000a3d cdw11:00000000 00:05:57.774 [2024-10-17 13:14:05.644600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:57.774 #29 NEW cov: 12367 ft: 14387 corp: 23/65b lim: 10 exec/s: 29 rss: 74Mb L: 6/7 MS: 1 InsertRepeatedBytes- 00:05:57.774 [2024-10-17 13:14:05.704391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000dd2a cdw11:00000000 00:05:57.775 [2024-10-17 13:14:05.704416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.775 #30 NEW cov: 12367 ft: 14393 corp: 24/68b lim: 10 exec/s: 30 rss: 74Mb L: 3/7 MS: 1 InsertByte- 00:05:57.775 [2024-10-17 13:14:05.764558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000671a cdw11:00000000 00:05:57.775 [2024-10-17 13:14:05.764584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.775 #31 NEW cov: 12367 ft: 14416 corp: 25/70b lim: 10 exec/s: 31 rss: 74Mb L: 2/7 MS: 1 ChangeBit- 00:05:57.775 [2024-10-17 13:14:05.824885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a08 cdw11:00000000 00:05:57.775 [2024-10-17 13:14:05.824913] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.775 [2024-10-17 13:14:05.824969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002a08 cdw11:00000000 00:05:57.775 [2024-10-17 13:14:05.824988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:58.033 #32 NEW cov: 12367 ft: 14422 corp: 26/74b lim: 10 exec/s: 32 rss: 74Mb L: 4/7 MS: 1 CopyPart- 00:05:58.033 [2024-10-17 13:14:05.864847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000232a cdw11:00000000 00:05:58.033 [2024-10-17 13:14:05.864872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:58.033 #33 NEW cov: 12367 ft: 14425 corp: 27/77b lim: 10 exec/s: 33 rss: 74Mb L: 3/7 MS: 1 InsertByte- 00:05:58.033 [2024-10-17 13:14:05.904910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a3d cdw11:00000000 00:05:58.033 [2024-10-17 13:14:05.904934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:58.033 #34 NEW cov: 12367 ft: 14432 corp: 28/80b lim: 10 exec/s: 34 rss: 75Mb L: 3/7 MS: 1 InsertByte- 00:05:58.033 [2024-10-17 13:14:05.965374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000002ff cdw11:00000000 00:05:58.033 [2024-10-17 13:14:05.965399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:58.033 [2024-10-17 13:14:05.965451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:05:58.033 [2024-10-17 13:14:05.965465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:58.033 [2024-10-17 13:14:05.965515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000a3d cdw11:00000000 00:05:58.033 [2024-10-17 13:14:05.965529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:58.033 #35 NEW cov: 12367 ft: 14445 corp: 29/86b lim: 10 exec/s: 35 rss: 75Mb L: 6/7 MS: 1 ChangeBit- 00:05:58.033 [2024-10-17 13:14:06.025404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:05:58.033 [2024-10-17 13:14:06.025433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:58.033 [2024-10-17 13:14:06.025487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff70 cdw11:00000000 00:05:58.033 [2024-10-17 13:14:06.025501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:58.033 #36 NEW cov: 12367 ft: 14452 corp: 30/91b lim: 10 exec/s: 36 rss: 75Mb L: 5/7 MS: 1 InsertRepeatedBytes- 00:05:58.292 [2024-10-17 13:14:06.085495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007070 cdw11:00000000 00:05:58.292 [2024-10-17 13:14:06.085526] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:58.292 #37 NEW cov: 12367 ft: 14608 corp: 31/93b lim: 10 exec/s: 37 rss: 75Mb L: 2/7 MS: 1 CopyPart- 00:05:58.292 [2024-10-17 13:14:06.125712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002d70 cdw11:00000000 00:05:58.292 [2024-10-17 13:14:06.125737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:58.292 [2024-10-17 13:14:06.125793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:05:58.292 [2024-10-17 13:14:06.125806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:58.292 #38 NEW cov: 12367 ft: 14618 corp: 32/97b lim: 10 exec/s: 38 rss: 75Mb L: 4/7 MS: 1 CrossOver- 00:05:58.292 [2024-10-17 13:14:06.165933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000dd2a cdw11:00000000 00:05:58.292 [2024-10-17 13:14:06.165958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:58.292 [2024-10-17 13:14:06.166011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a26 cdw11:00000000 00:05:58.292 [2024-10-17 13:14:06.166025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:58.292 [2024-10-17 13:14:06.166077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00002626 cdw11:00000000 00:05:58.292 [2024-10-17 13:14:06.166091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:58.292 #39 NEW cov: 12367 ft: 14639 corp: 33/104b lim: 10 exec/s: 39 rss: 75Mb L: 7/7 MS: 1 InsertRepeatedBytes- 00:05:58.292 [2024-10-17 13:14:06.226053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:05:58.292 [2024-10-17 13:14:06.226077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:58.292 [2024-10-17 13:14:06.226132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff0a cdw11:00000000 00:05:58.292 [2024-10-17 13:14:06.226145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:58.292 [2024-10-17 13:14:06.226202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00003d0a cdw11:00000000 00:05:58.292 [2024-10-17 13:14:06.226216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:58.292 #44 NEW cov: 12367 ft: 14646 corp: 34/110b lim: 10 exec/s: 44 rss: 75Mb L: 6/7 MS: 5 ShuffleBytes-ShuffleBytes-ShuffleBytes-ShuffleBytes-CrossOver- 00:05:58.292 [2024-10-17 13:14:06.265974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a3d cdw11:00000000 00:05:58.292 [2024-10-17 13:14:06.265999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:58.292 #45 NEW cov: 12367 ft: 14728 corp: 35/112b lim: 10 exec/s: 45 rss: 75Mb L: 2/7 MS: 1 EraseBytes- 00:05:58.292 [2024-10-17 13:14:06.326114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003f2a cdw11:00000000 00:05:58.292 [2024-10-17 13:14:06.326139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:58.604 #48 NEW cov: 12367 ft: 14739 corp: 36/115b lim: 10 exec/s: 24 rss: 75Mb L: 3/7 MS: 3 EraseBytes-ChangeByte-CrossOver- 00:05:58.604 #48 DONE cov: 12367 ft: 14739 corp: 36/115b lim: 10 exec/s: 24 rss: 75Mb 00:05:58.604 Done 48 runs in 2 second(s) 00:05:58.604 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:05:58.604 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:05:58.604 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:05:58.604 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:05:58.604 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:05:58.604 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:05:58.604 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:05:58.604 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:05:58.604 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:05:58.604 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:05:58.604 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:05:58.604 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:05:58.604 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:05:58.604 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:05:58.604 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:05:58.604 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:05:58.604 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:05:58.604 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:05:58.604 13:14:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:05:58.604 [2024-10-17 13:14:06.498610] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
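For readers following the xtrace, the per-instance setup that nvmf/run.sh logs above for fuzzer type 7 reduces to roughly the shell sketch below. ROOT is just a shorthand variable for the spdk checkout path used in this job, the "44" + printf port pattern is inferred from the 4406/4407/4408 values seen in the trace, and the redirections into the JSON config and the LSAN suppression file are implied by the local variables rather than visible in the xtrace, so treat this as an approximation of the script, not a verbatim copy:

  # approximate shape of start_llvm_fuzz in nvmf/run.sh (fuzzer_type=7 in this run)
  ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # spdk checkout used by this job
  fuzzer_type=7
  timen=1                                    # forwarded as -t (run time limit)
  core=0x1                                   # forwarded as -m (reactor core mask)
  port="44$(printf %02d "$fuzzer_type")"     # 4406, 4407, 4408, ... (inferred pattern)
  nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
  suppress_file=/var/tmp/suppress_nvmf_fuzz
  corpus_dir=$ROOT/../corpus/llvm_nvmf_${fuzzer_type}
  mkdir -p "$corpus_dir"
  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
  # give each instance its own NVMe/TCP listener port in the target JSON config
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      "$ROOT/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
  # suppress two known, intentional leaks in the target for LeakSanitizer
  echo leak:spdk_nvmf_qpair_disconnect  > "$suppress_file"
  echo leak:nvmf_ctrlr_create          >> "$suppress_file"
  LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0" \
    "$ROOT/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
      -m "$core" -s 512 -P "$ROOT/../output/llvm/" \
      -F "$trid" -c "$nvmf_cfg" -t "$timen" \
      -D "$corpus_dir" -Z "$fuzzer_type"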
00:05:58.604 [2024-10-17 13:14:06.498697] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3842812 ] 00:05:58.863 [2024-10-17 13:14:06.680682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.863 [2024-10-17 13:14:06.714810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.863 [2024-10-17 13:14:06.773889] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:58.863 [2024-10-17 13:14:06.790273] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:05:58.863 INFO: Running with entropic power schedule (0xFF, 100). 00:05:58.863 INFO: Seed: 3734337136 00:05:58.863 INFO: Loaded 1 modules (384632 inline 8-bit counters): 384632 [0x2bf274c, 0x2c505c4), 00:05:58.863 INFO: Loaded 1 PC tables (384632 PCs): 384632 [0x2c505c8,0x322ed48), 00:05:58.863 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:05:58.863 INFO: A corpus is not provided, starting from an empty corpus 00:05:58.863 #2 INITED exec/s: 0 rss: 65Mb 00:05:58.863 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:05:58.863 This may also happen if the target rejected all inputs we tried so far 00:05:58.863 [2024-10-17 13:14:06.866552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000032ff cdw11:00000000 00:05:58.863 [2024-10-17 13:14:06.866588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:58.863 [2024-10-17 13:14:06.866712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:05:58.863 [2024-10-17 13:14:06.866727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:59.436 NEW_FUNC[1/712]: 0x447108 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:05:59.436 NEW_FUNC[2/712]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:05:59.436 #7 NEW cov: 12118 ft: 12119 corp: 2/6b lim: 10 exec/s: 0 rss: 73Mb L: 5/5 MS: 5 ChangeBinInt-ShuffleBytes-ChangeByte-ShuffleBytes-CMP- DE: "\377\377\377!"- 00:05:59.436 [2024-10-17 13:14:07.207520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000032ff cdw11:00000000 00:05:59.436 [2024-10-17 13:14:07.207559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:59.436 [2024-10-17 13:14:07.207681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:05:59.436 [2024-10-17 13:14:07.207699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:59.436 NEW_FUNC[1/1]: 0x14d04e8 in nvmf_tcp_poll_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3555 00:05:59.436 #8 NEW cov: 12252 ft: 12769 corp: 3/11b lim: 10 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CopyPart- 
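The #N status lines in these runs are standard libFuzzer output; as a rough legend for the fields (summarizing upstream libFuzzer conventions, not something printed by this job itself):

  #7 NEW  cov: 12118 ft: 12119 corp: 2/6b lim: 10 exec/s: 0 rss: 73Mb L: 5/5 MS: 5 ...
  #N      number of test-case executions when the event was logged
  NEW     the input added new coverage and was kept in the in-memory corpus
  cov     coverage points (edges) observed so far
  ft      coverage "features", a finer-grained signal than cov
  corp    corpus size, as number of inputs / total bytes
  lim     current cap on the length of generated inputs
  exec/s  executions per second
  rss     resident memory of the fuzzer process
  L       size of this input / largest input in the corpus
  MS      count and names of the mutations that produced the input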
00:05:59.436 [2024-10-17 13:14:07.277827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000032ff cdw11:00000000 00:05:59.436 [2024-10-17 13:14:07.277857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:59.436 [2024-10-17 13:14:07.277964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:05:59.436 [2024-10-17 13:14:07.277982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:59.436 [2024-10-17 13:14:07.278085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000021b7 cdw11:00000000 00:05:59.436 [2024-10-17 13:14:07.278102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:59.436 #9 NEW cov: 12258 ft: 13141 corp: 4/17b lim: 10 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 InsertByte- 00:05:59.436 [2024-10-17 13:14:07.328117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:05:59.436 [2024-10-17 13:14:07.328143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:59.436 [2024-10-17 13:14:07.328257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff32 cdw11:00000000 00:05:59.436 [2024-10-17 13:14:07.328275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:59.436 [2024-10-17 13:14:07.328386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:05:59.436 [2024-10-17 13:14:07.328402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:59.437 [2024-10-17 13:14:07.328516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff21 cdw11:00000000 00:05:59.437 [2024-10-17 13:14:07.328532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:59.437 #10 NEW cov: 12343 ft: 13587 corp: 5/26b lim: 10 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:05:59.437 [2024-10-17 13:14:07.398285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000032ff cdw11:00000000 00:05:59.437 [2024-10-17 13:14:07.398311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:59.437 [2024-10-17 13:14:07.398428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:05:59.437 [2024-10-17 13:14:07.398447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:59.437 [2024-10-17 13:14:07.398560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:05:59.437 [2024-10-17 13:14:07.398578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:05:59.437 [2024-10-17 13:14:07.398687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff21 cdw11:00000000 00:05:59.437 [2024-10-17 13:14:07.398704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:59.437 #11 NEW cov: 12343 ft: 13669 corp: 6/35b lim: 10 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:05:59.437 [2024-10-17 13:14:07.438000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:05:59.437 [2024-10-17 13:14:07.438026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:59.437 [2024-10-17 13:14:07.438132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff21 cdw11:00000000 00:05:59.437 [2024-10-17 13:14:07.438147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:59.437 #12 NEW cov: 12343 ft: 13744 corp: 7/40b lim: 10 exec/s: 0 rss: 74Mb L: 5/9 MS: 1 EraseBytes- 00:05:59.699 [2024-10-17 13:14:07.498696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff00 cdw11:00000000 00:05:59.699 [2024-10-17 13:14:07.498722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:59.699 [2024-10-17 13:14:07.498841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:59.699 [2024-10-17 13:14:07.498857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:59.699 [2024-10-17 13:14:07.498966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:05:59.699 [2024-10-17 13:14:07.498980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:59.699 [2024-10-17 13:14:07.499085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:05:59.699 [2024-10-17 13:14:07.499101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:59.699 #13 NEW cov: 12343 ft: 13855 corp: 8/49b lim: 10 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:05:59.699 [2024-10-17 13:14:07.568601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000032ff cdw11:00000000 00:05:59.699 [2024-10-17 13:14:07.568628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:59.699 [2024-10-17 13:14:07.568738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:05:59.699 [2024-10-17 13:14:07.568754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:59.699 [2024-10-17 13:14:07.568855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000021b7 cdw11:00000000 00:05:59.699 [2024-10-17 
13:14:07.568872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:59.699 #14 NEW cov: 12343 ft: 13887 corp: 9/55b lim: 10 exec/s: 0 rss: 74Mb L: 6/9 MS: 1 EraseBytes- 00:05:59.699 [2024-10-17 13:14:07.618287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005d0a cdw11:00000000 00:05:59.699 [2024-10-17 13:14:07.618314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:59.699 #15 NEW cov: 12343 ft: 14120 corp: 10/57b lim: 10 exec/s: 0 rss: 74Mb L: 2/9 MS: 1 InsertByte- 00:05:59.699 [2024-10-17 13:14:07.669018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000032ff cdw11:00000000 00:05:59.699 [2024-10-17 13:14:07.669044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:59.699 [2024-10-17 13:14:07.669162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:05:59.699 [2024-10-17 13:14:07.669191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:59.699 [2024-10-17 13:14:07.669306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:05:59.699 [2024-10-17 13:14:07.669323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:59.699 [2024-10-17 13:14:07.669424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 00:05:59.699 [2024-10-17 13:14:07.669443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:59.699 #16 NEW cov: 12343 ft: 14154 corp: 11/66b lim: 10 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ChangeBinInt- 00:05:59.699 [2024-10-17 13:14:07.718595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000032ff cdw11:00000000 00:05:59.700 [2024-10-17 13:14:07.718621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:59.958 NEW_FUNC[1/1]: 0x1bff788 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:05:59.958 #17 NEW cov: 12366 ft: 14210 corp: 12/69b lim: 10 exec/s: 0 rss: 74Mb L: 3/9 MS: 1 EraseBytes- 00:05:59.958 [2024-10-17 13:14:07.789673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:05:59.958 [2024-10-17 13:14:07.789700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:59.958 [2024-10-17 13:14:07.789818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:05:59.958 [2024-10-17 13:14:07.789835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:59.958 [2024-10-17 13:14:07.789944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000032ff cdw11:00000000 00:05:59.958 
[2024-10-17 13:14:07.789962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:59.958 [2024-10-17 13:14:07.790073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:05:59.958 [2024-10-17 13:14:07.790092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:59.958 [2024-10-17 13:14:07.790207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:000021b7 cdw11:00000000 00:05:59.958 [2024-10-17 13:14:07.790225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:05:59.958 #18 NEW cov: 12366 ft: 14264 corp: 13/79b lim: 10 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:05:59.958 [2024-10-17 13:14:07.839659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff00 cdw11:00000000 00:05:59.958 [2024-10-17 13:14:07.839687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:59.958 [2024-10-17 13:14:07.839802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:59.958 [2024-10-17 13:14:07.839819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:59.958 [2024-10-17 13:14:07.839926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:05:59.958 [2024-10-17 13:14:07.839945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:59.958 [2024-10-17 13:14:07.840052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:05:59.958 [2024-10-17 13:14:07.840071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:59.958 #19 NEW cov: 12366 ft: 14310 corp: 14/88b lim: 10 exec/s: 19 rss: 74Mb L: 9/10 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:05:59.958 [2024-10-17 13:14:07.909511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff21 cdw11:00000000 00:05:59.958 [2024-10-17 13:14:07.909540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:59.958 [2024-10-17 13:14:07.909659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:05:59.958 [2024-10-17 13:14:07.909676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:59.958 #20 NEW cov: 12366 ft: 14426 corp: 15/93b lim: 10 exec/s: 20 rss: 74Mb L: 5/10 MS: 1 ShuffleBytes- 00:05:59.958 [2024-10-17 13:14:07.979636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff21 cdw11:00000000 00:05:59.959 [2024-10-17 13:14:07.979666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:59.959 [2024-10-17 
13:14:07.979778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:05:59.959 [2024-10-17 13:14:07.979799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.217 #21 NEW cov: 12366 ft: 14435 corp: 16/97b lim: 10 exec/s: 21 rss: 74Mb L: 4/10 MS: 1 EraseBytes- 00:06:00.217 [2024-10-17 13:14:08.050526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f8f8 cdw11:00000000 00:06:00.217 [2024-10-17 13:14:08.050552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.217 [2024-10-17 13:14:08.050659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f8f8 cdw11:00000000 00:06:00.217 [2024-10-17 13:14:08.050677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.217 [2024-10-17 13:14:08.050801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f832 cdw11:00000000 00:06:00.217 [2024-10-17 13:14:08.050824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.217 [2024-10-17 13:14:08.050943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:00.217 [2024-10-17 13:14:08.050961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:00.217 [2024-10-17 13:14:08.051067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ff21 cdw11:00000000 00:06:00.217 [2024-10-17 13:14:08.051084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:00.217 #22 NEW cov: 12366 ft: 14450 corp: 17/107b lim: 10 exec/s: 22 rss: 74Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:06:00.217 [2024-10-17 13:14:08.100025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff21 cdw11:00000000 00:06:00.217 [2024-10-17 13:14:08.100052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.217 [2024-10-17 13:14:08.100166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:00.217 [2024-10-17 13:14:08.100183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.217 #23 NEW cov: 12366 ft: 14574 corp: 18/112b lim: 10 exec/s: 23 rss: 74Mb L: 5/10 MS: 1 InsertByte- 00:06:00.217 [2024-10-17 13:14:08.160315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:00.217 [2024-10-17 13:14:08.160343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.217 [2024-10-17 13:14:08.160462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000021ff cdw11:00000000 00:06:00.217 [2024-10-17 13:14:08.160478] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.217 #24 NEW cov: 12366 ft: 14649 corp: 19/117b lim: 10 exec/s: 24 rss: 75Mb L: 5/10 MS: 1 ShuffleBytes- 00:06:00.217 [2024-10-17 13:14:08.230468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000cdff cdw11:00000000 00:06:00.217 [2024-10-17 13:14:08.230496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.218 [2024-10-17 13:14:08.230608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:00.218 [2024-10-17 13:14:08.230625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.218 #25 NEW cov: 12366 ft: 14679 corp: 20/122b lim: 10 exec/s: 25 rss: 75Mb L: 5/10 MS: 1 ChangeBinInt- 00:06:00.477 [2024-10-17 13:14:08.280774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000032ff cdw11:00000000 00:06:00.478 [2024-10-17 13:14:08.280803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.478 [2024-10-17 13:14:08.280924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:00.478 [2024-10-17 13:14:08.280941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.478 [2024-10-17 13:14:08.281059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005721 cdw11:00000000 00:06:00.478 [2024-10-17 13:14:08.281080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.478 #26 NEW cov: 12366 ft: 14724 corp: 21/129b lim: 10 exec/s: 26 rss: 75Mb L: 7/10 MS: 1 InsertByte- 00:06:00.478 [2024-10-17 13:14:08.330770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:00.478 [2024-10-17 13:14:08.330798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.478 [2024-10-17 13:14:08.330910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:00.478 [2024-10-17 13:14:08.330928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.478 #27 NEW cov: 12366 ft: 14753 corp: 22/134b lim: 10 exec/s: 27 rss: 75Mb L: 5/10 MS: 1 CrossOver- 00:06:00.478 [2024-10-17 13:14:08.401147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff21 cdw11:00000000 00:06:00.478 [2024-10-17 13:14:08.401178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.478 [2024-10-17 13:14:08.401296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:00.478 [2024-10-17 13:14:08.401314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:06:00.478 [2024-10-17 13:14:08.401420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:00.478 [2024-10-17 13:14:08.401436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.478 #28 NEW cov: 12366 ft: 14808 corp: 23/141b lim: 10 exec/s: 28 rss: 75Mb L: 7/10 MS: 1 InsertRepeatedBytes- 00:06:00.478 [2024-10-17 13:14:08.441391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff21 cdw11:00000000 00:06:00.478 [2024-10-17 13:14:08.441416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.478 [2024-10-17 13:14:08.441523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00003fff cdw11:00000000 00:06:00.478 [2024-10-17 13:14:08.441539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.478 [2024-10-17 13:14:08.441651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:00.478 [2024-10-17 13:14:08.441669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.478 [2024-10-17 13:14:08.441790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:00.478 [2024-10-17 13:14:08.441807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:00.478 #29 NEW cov: 12366 ft: 14843 corp: 24/149b lim: 10 exec/s: 29 rss: 75Mb L: 8/10 MS: 1 InsertByte- 00:06:00.478 [2024-10-17 13:14:08.501591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000326f cdw11:00000000 00:06:00.478 [2024-10-17 13:14:08.501617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.478 [2024-10-17 13:14:08.501728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006f6f cdw11:00000000 00:06:00.478 [2024-10-17 13:14:08.501744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.478 [2024-10-17 13:14:08.501857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:00.478 [2024-10-17 13:14:08.501871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.478 [2024-10-17 13:14:08.501983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff21 cdw11:00000000 00:06:00.478 [2024-10-17 13:14:08.502002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:00.478 #30 NEW cov: 12366 ft: 14927 corp: 25/157b lim: 10 exec/s: 30 rss: 75Mb L: 8/10 MS: 1 InsertRepeatedBytes- 00:06:00.737 [2024-10-17 13:14:08.551536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000032ff cdw11:00000000 00:06:00.737 [2024-10-17 
13:14:08.551562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.737 [2024-10-17 13:14:08.551675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000fbff cdw11:00000000 00:06:00.737 [2024-10-17 13:14:08.551693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.737 [2024-10-17 13:14:08.551803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005721 cdw11:00000000 00:06:00.737 [2024-10-17 13:14:08.551820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.737 #31 NEW cov: 12366 ft: 14969 corp: 26/164b lim: 10 exec/s: 31 rss: 75Mb L: 7/10 MS: 1 ChangeBit- 00:06:00.737 [2024-10-17 13:14:08.621868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff21 cdw11:00000000 00:06:00.737 [2024-10-17 13:14:08.621894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.737 [2024-10-17 13:14:08.622007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00003fdf cdw11:00000000 00:06:00.737 [2024-10-17 13:14:08.622024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.737 [2024-10-17 13:14:08.622137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:00.737 [2024-10-17 13:14:08.622157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.737 [2024-10-17 13:14:08.622265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:00.737 [2024-10-17 13:14:08.622282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:00.737 #32 NEW cov: 12366 ft: 14982 corp: 27/172b lim: 10 exec/s: 32 rss: 75Mb L: 8/10 MS: 1 ChangeBit- 00:06:00.737 [2024-10-17 13:14:08.681931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000032ff cdw11:00000000 00:06:00.737 [2024-10-17 13:14:08.681957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.737 [2024-10-17 13:14:08.682070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f9ff cdw11:00000000 00:06:00.737 [2024-10-17 13:14:08.682088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.737 [2024-10-17 13:14:08.682196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000021b7 cdw11:00000000 00:06:00.737 [2024-10-17 13:14:08.682214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.738 #33 NEW cov: 12366 ft: 14993 corp: 28/178b lim: 10 exec/s: 33 rss: 75Mb L: 6/10 MS: 1 ChangeBinInt- 00:06:00.738 [2024-10-17 13:14:08.722531] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:00.738 [2024-10-17 13:14:08.722557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.738 [2024-10-17 13:14:08.722673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:00.738 [2024-10-17 13:14:08.722690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.738 [2024-10-17 13:14:08.722795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff21 cdw11:00000000 00:06:00.738 [2024-10-17 13:14:08.722812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.738 [2024-10-17 13:14:08.722920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000b7ff cdw11:00000000 00:06:00.738 [2024-10-17 13:14:08.722937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:00.738 [2024-10-17 13:14:08.723044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:000021b7 cdw11:00000000 00:06:00.738 [2024-10-17 13:14:08.723063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:00.738 #34 NEW cov: 12366 ft: 15010 corp: 29/188b lim: 10 exec/s: 34 rss: 75Mb L: 10/10 MS: 1 CopyPart- 00:06:00.738 [2024-10-17 13:14:08.762335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:00.738 [2024-10-17 13:14:08.762360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.738 [2024-10-17 13:14:08.762472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000021ff cdw11:00000000 00:06:00.738 [2024-10-17 13:14:08.762488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.738 [2024-10-17 13:14:08.762600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000a2a2 cdw11:00000000 00:06:00.738 [2024-10-17 13:14:08.762618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.738 [2024-10-17 13:14:08.762731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000a22d cdw11:00000000 00:06:00.738 [2024-10-17 13:14:08.762749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:00.998 #35 NEW cov: 12366 ft: 15022 corp: 30/196b lim: 10 exec/s: 35 rss: 75Mb L: 8/10 MS: 1 InsertRepeatedBytes- 00:06:00.998 [2024-10-17 13:14:08.832097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c832 cdw11:00000000 00:06:00.998 [2024-10-17 13:14:08.832124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.998 [2024-10-17 13:14:08.832237] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff21 cdw11:00000000 00:06:00.998 [2024-10-17 13:14:08.832254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.998 #36 NEW cov: 12366 ft: 15031 corp: 31/200b lim: 10 exec/s: 18 rss: 75Mb L: 4/10 MS: 1 InsertByte- 00:06:00.998 #36 DONE cov: 12366 ft: 15031 corp: 31/200b lim: 10 exec/s: 18 rss: 75Mb 00:06:00.998 ###### Recommended dictionary. ###### 00:06:00.998 "\377\377\377!" # Uses: 0 00:06:00.998 "\000\000\000\000\000\000\000\000" # Uses: 1 00:06:00.998 ###### End of recommended dictionary. ###### 00:06:00.998 Done 36 runs in 2 second(s) 00:06:00.998 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:06:00.998 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:00.998 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:00.998 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:06:00.998 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:06:00.998 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:00.998 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:00.998 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:00.998 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:06:00.998 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:00.998 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:00.998 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:06:00.998 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:06:00.998 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:00.998 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:06:00.998 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:00.998 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:00.998 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:00.998 13:14:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:06:00.998 [2024-10-17 13:14:09.020964] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
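Each instance tears down the same way before the next one starts: run.sh removes its temporary JSON config and the shared LSAN suppression file (the rm -rf at nvmf/run.sh:54 above), and the driver loop in the common.sh one directory up (referenced as ../common.sh in the trace) advances to the next fuzzer type. From the (( i++ )) / (( i < fuzz_num )) lines at common.sh:72 and the start_llvm_fuzz call at :73, that loop looks roughly like the sketch below; the exact loop form and local variable names in the script may differ:

  # outer driver loop implied by the common.sh:72-73 trace lines
  for (( i = 0; i < fuzz_num; i++ )); do
      start_llvm_fuzz "$i" 1 0x1    # fuzzer type, -t value, core mask, as seen in the trace
  done

  # tail of each start_llvm_fuzz invocation (nvmf/run.sh:54): per-instance cleanup
  rm -rf "/tmp/fuzz_json_${fuzzer_type}.conf" /var/tmp/suppress_nvmf_fuzz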
00:06:00.998 [2024-10-17 13:14:09.021034] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3843330 ] 00:06:01.257 [2024-10-17 13:14:09.199806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.257 [2024-10-17 13:14:09.232701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.257 [2024-10-17 13:14:09.291360] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:01.257 [2024-10-17 13:14:09.307719] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:06:01.518 INFO: Running with entropic power schedule (0xFF, 100). 00:06:01.518 INFO: Seed: 1955348621 00:06:01.518 INFO: Loaded 1 modules (384632 inline 8-bit counters): 384632 [0x2bf274c, 0x2c505c4), 00:06:01.518 INFO: Loaded 1 PC tables (384632 PCs): 384632 [0x2c505c8,0x322ed48), 00:06:01.518 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:01.518 INFO: A corpus is not provided, starting from an empty corpus 00:06:01.518 [2024-10-17 13:14:09.356805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.518 [2024-10-17 13:14:09.356832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.518 #2 INITED cov: 12168 ft: 12145 corp: 1/1b exec/s: 0 rss: 71Mb 00:06:01.518 [2024-10-17 13:14:09.396875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.518 [2024-10-17 13:14:09.396903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.518 #3 NEW cov: 12281 ft: 12664 corp: 2/2b lim: 5 exec/s: 0 rss: 72Mb L: 1/1 MS: 1 ShuffleBytes- 00:06:01.518 [2024-10-17 13:14:09.457262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.518 [2024-10-17 13:14:09.457292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.518 [2024-10-17 13:14:09.457351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.518 [2024-10-17 13:14:09.457367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:01.518 #4 NEW cov: 12287 ft: 13555 corp: 3/4b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 CopyPart- 00:06:01.518 [2024-10-17 13:14:09.497094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.518 [2024-10-17 13:14:09.497120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.518 #5 NEW cov: 12372 ft: 13772 corp: 4/5b lim: 5 exec/s: 0 rss: 72Mb L: 1/2 MS: 1 ChangeByte- 00:06:01.518 [2024-10-17 13:14:09.557972] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.518 [2024-10-17 13:14:09.557999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.518 [2024-10-17 13:14:09.558057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.518 [2024-10-17 13:14:09.558071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:01.518 [2024-10-17 13:14:09.558129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.518 [2024-10-17 13:14:09.558143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:01.518 [2024-10-17 13:14:09.558206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.518 [2024-10-17 13:14:09.558220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:01.518 [2024-10-17 13:14:09.558276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.518 [2024-10-17 13:14:09.558290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:01.777 #6 NEW cov: 12372 ft: 14238 corp: 5/10b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:01.777 [2024-10-17 13:14:09.597388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.777 [2024-10-17 13:14:09.597414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.777 #7 NEW cov: 12372 ft: 14431 corp: 6/11b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ChangeByte- 00:06:01.777 [2024-10-17 13:14:09.638134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.777 [2024-10-17 13:14:09.638165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.777 [2024-10-17 13:14:09.638241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.777 [2024-10-17 13:14:09.638256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:01.777 [2024-10-17 13:14:09.638317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.777 [2024-10-17 13:14:09.638331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:06:01.777 [2024-10-17 13:14:09.638389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.777 [2024-10-17 13:14:09.638403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:01.777 [2024-10-17 13:14:09.638461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.777 [2024-10-17 13:14:09.638475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:01.777 #8 NEW cov: 12372 ft: 14496 corp: 7/16b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CopyPart- 00:06:01.777 [2024-10-17 13:14:09.698311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.777 [2024-10-17 13:14:09.698338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.777 [2024-10-17 13:14:09.698413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.777 [2024-10-17 13:14:09.698427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:01.777 [2024-10-17 13:14:09.698489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.777 [2024-10-17 13:14:09.698503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:01.777 [2024-10-17 13:14:09.698561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.777 [2024-10-17 13:14:09.698575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:01.777 [2024-10-17 13:14:09.698635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.777 [2024-10-17 13:14:09.698649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:01.777 #9 NEW cov: 12372 ft: 14528 corp: 8/21b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:01.777 [2024-10-17 13:14:09.737783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.777 [2024-10-17 13:14:09.737809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.777 #10 NEW cov: 12372 ft: 14565 corp: 9/22b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 EraseBytes- 00:06:01.777 [2024-10-17 13:14:09.798121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.777 [2024-10-17 13:14:09.798147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.777 [2024-10-17 13:14:09.798228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:01.777 [2024-10-17 13:14:09.798242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:01.777 #11 NEW cov: 12372 ft: 14618 corp: 10/24b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 ChangeBit- 00:06:02.037 [2024-10-17 13:14:09.838052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.037 [2024-10-17 13:14:09.838079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.037 #12 NEW cov: 12372 ft: 14716 corp: 11/25b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 EraseBytes- 00:06:02.037 [2024-10-17 13:14:09.878866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.037 [2024-10-17 13:14:09.878898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.037 [2024-10-17 13:14:09.878975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.037 [2024-10-17 13:14:09.878997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.037 [2024-10-17 13:14:09.879071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.037 [2024-10-17 13:14:09.879092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:02.037 [2024-10-17 13:14:09.879175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.037 [2024-10-17 13:14:09.879191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:02.037 [2024-10-17 13:14:09.879253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.037 [2024-10-17 13:14:09.879274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:02.037 #13 NEW cov: 12372 ft: 14759 corp: 12/30b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ChangeBit- 00:06:02.037 [2024-10-17 13:14:09.918913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.037 [2024-10-17 13:14:09.918941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.037 
[2024-10-17 13:14:09.919000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.037 [2024-10-17 13:14:09.919014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.037 [2024-10-17 13:14:09.919073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.037 [2024-10-17 13:14:09.919090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:02.037 [2024-10-17 13:14:09.919156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.037 [2024-10-17 13:14:09.919172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:02.037 [2024-10-17 13:14:09.919230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.037 [2024-10-17 13:14:09.919249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:02.037 #14 NEW cov: 12372 ft: 14795 corp: 13/35b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CrossOver- 00:06:02.037 [2024-10-17 13:14:09.958893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.037 [2024-10-17 13:14:09.958918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.037 [2024-10-17 13:14:09.958979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.037 [2024-10-17 13:14:09.958994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.037 [2024-10-17 13:14:09.959052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.037 [2024-10-17 13:14:09.959066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:02.037 [2024-10-17 13:14:09.959125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.037 [2024-10-17 13:14:09.959139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:02.037 #15 NEW cov: 12372 ft: 14818 corp: 14/39b lim: 5 exec/s: 0 rss: 72Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:02.037 [2024-10-17 13:14:09.998836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.037 [2024-10-17 13:14:09.998862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.037 [2024-10-17 13:14:09.998919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.037 [2024-10-17 13:14:09.998933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.037 [2024-10-17 13:14:09.998992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.037 [2024-10-17 13:14:09.999006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:02.037 #16 NEW cov: 12372 ft: 15025 corp: 15/42b lim: 5 exec/s: 0 rss: 72Mb L: 3/5 MS: 1 InsertByte- 00:06:02.037 [2024-10-17 13:14:10.078752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.037 [2024-10-17 13:14:10.078783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.296 #17 NEW cov: 12372 ft: 15068 corp: 16/43b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ChangeBinInt- 00:06:02.296 [2024-10-17 13:14:10.119338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.296 [2024-10-17 13:14:10.119368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.296 [2024-10-17 13:14:10.119430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.296 [2024-10-17 13:14:10.119444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.296 [2024-10-17 13:14:10.119502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.296 [2024-10-17 13:14:10.119519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:02.296 [2024-10-17 13:14:10.119580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.296 [2024-10-17 13:14:10.119594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:02.296 #18 NEW cov: 12372 ft: 15118 corp: 17/47b lim: 5 exec/s: 0 rss: 72Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:02.296 [2024-10-17 13:14:10.179199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.296 [2024-10-17 13:14:10.179225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.296 [2024-10-17 13:14:10.179286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 
nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.296 [2024-10-17 13:14:10.179301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.296 #19 NEW cov: 12372 ft: 15121 corp: 18/49b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 CrossOver- 00:06:02.296 [2024-10-17 13:14:10.219656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.296 [2024-10-17 13:14:10.219681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.296 [2024-10-17 13:14:10.219739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.296 [2024-10-17 13:14:10.219753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.296 [2024-10-17 13:14:10.219811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.296 [2024-10-17 13:14:10.219824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:02.296 [2024-10-17 13:14:10.219882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.296 [2024-10-17 13:14:10.219899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:02.669 NEW_FUNC[1/1]: 0x1bff788 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:02.669 #20 NEW cov: 12395 ft: 15155 corp: 19/53b lim: 5 exec/s: 20 rss: 73Mb L: 4/5 MS: 1 ChangeByte- 00:06:02.669 [2024-10-17 13:14:10.560132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.669 [2024-10-17 13:14:10.560168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.669 [2024-10-17 13:14:10.560242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.669 [2024-10-17 13:14:10.560257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.669 #21 NEW cov: 12395 ft: 15169 corp: 20/55b lim: 5 exec/s: 21 rss: 73Mb L: 2/5 MS: 1 InsertByte- 00:06:02.669 [2024-10-17 13:14:10.600142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.669 [2024-10-17 13:14:10.600252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.669 [2024-10-17 13:14:10.600307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:02.669 [2024-10-17 13:14:10.600321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.669 #22 NEW cov: 12395 ft: 15205 corp: 21/57b lim: 5 exec/s: 22 rss: 73Mb L: 2/5 MS: 1 EraseBytes- 00:06:02.669 [2024-10-17 13:14:10.660215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.669 [2024-10-17 13:14:10.660241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.669 #23 NEW cov: 12395 ft: 15242 corp: 22/58b lim: 5 exec/s: 23 rss: 73Mb L: 1/5 MS: 1 CopyPart- 00:06:02.980 [2024-10-17 13:14:10.720542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.980 [2024-10-17 13:14:10.720568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.980 [2024-10-17 13:14:10.720624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.980 [2024-10-17 13:14:10.720640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.980 #24 NEW cov: 12395 ft: 15258 corp: 23/60b lim: 5 exec/s: 24 rss: 73Mb L: 2/5 MS: 1 ChangeBit- 00:06:02.980 [2024-10-17 13:14:10.760518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.980 [2024-10-17 13:14:10.760543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.980 #25 NEW cov: 12395 ft: 15272 corp: 24/61b lim: 5 exec/s: 25 rss: 73Mb L: 1/5 MS: 1 CopyPart- 00:06:02.980 [2024-10-17 13:14:10.811248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.980 [2024-10-17 13:14:10.811273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.980 [2024-10-17 13:14:10.811345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.980 [2024-10-17 13:14:10.811358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.980 [2024-10-17 13:14:10.811409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.980 [2024-10-17 13:14:10.811423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:02.980 [2024-10-17 13:14:10.811476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.980 [2024-10-17 13:14:10.811489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:02.980 [2024-10-17 13:14:10.811541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.980 [2024-10-17 13:14:10.811555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:02.980 #26 NEW cov: 12395 ft: 15352 corp: 25/66b lim: 5 exec/s: 26 rss: 73Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:02.980 [2024-10-17 13:14:10.870951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.980 [2024-10-17 13:14:10.870975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.980 [2024-10-17 13:14:10.871031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.980 [2024-10-17 13:14:10.871046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.980 #27 NEW cov: 12395 ft: 15367 corp: 26/68b lim: 5 exec/s: 27 rss: 74Mb L: 2/5 MS: 1 ChangeBit- 00:06:02.980 [2024-10-17 13:14:10.931284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.980 [2024-10-17 13:14:10.931320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.980 [2024-10-17 13:14:10.931390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.980 [2024-10-17 13:14:10.931404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.980 [2024-10-17 13:14:10.931456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.980 [2024-10-17 13:14:10.931469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:02.980 #28 NEW cov: 12395 ft: 15385 corp: 27/71b lim: 5 exec/s: 28 rss: 74Mb L: 3/5 MS: 1 InsertByte- 00:06:02.980 [2024-10-17 13:14:10.991130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.980 [2024-10-17 13:14:10.991162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.980 #29 NEW cov: 12395 ft: 15420 corp: 28/72b lim: 5 exec/s: 29 rss: 74Mb L: 1/5 MS: 1 ChangeByte- 00:06:03.239 [2024-10-17 13:14:11.031357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.239 [2024-10-17 13:14:11.031383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:03.239 [2024-10-17 
13:14:11.031439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.239 [2024-10-17 13:14:11.031452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:03.239 #30 NEW cov: 12395 ft: 15435 corp: 29/74b lim: 5 exec/s: 30 rss: 74Mb L: 2/5 MS: 1 EraseBytes- 00:06:03.239 [2024-10-17 13:14:11.091521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.239 [2024-10-17 13:14:11.091546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:03.239 [2024-10-17 13:14:11.091613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.239 [2024-10-17 13:14:11.091627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:03.239 #31 NEW cov: 12395 ft: 15446 corp: 30/76b lim: 5 exec/s: 31 rss: 74Mb L: 2/5 MS: 1 InsertByte- 00:06:03.239 [2024-10-17 13:14:11.131749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.239 [2024-10-17 13:14:11.131774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:03.239 [2024-10-17 13:14:11.131843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.239 [2024-10-17 13:14:11.131857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:03.239 [2024-10-17 13:14:11.131907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.239 [2024-10-17 13:14:11.131920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:03.239 #32 NEW cov: 12395 ft: 15455 corp: 31/79b lim: 5 exec/s: 32 rss: 74Mb L: 3/5 MS: 1 ChangeByte- 00:06:03.240 [2024-10-17 13:14:11.171649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.240 [2024-10-17 13:14:11.171675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:03.240 #33 NEW cov: 12395 ft: 15471 corp: 32/80b lim: 5 exec/s: 33 rss: 74Mb L: 1/5 MS: 1 ChangeBinInt- 00:06:03.240 [2024-10-17 13:14:11.212045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.240 [2024-10-17 13:14:11.212071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:03.240 [2024-10-17 13:14:11.212126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.240 [2024-10-17 13:14:11.212140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:03.240 [2024-10-17 13:14:11.212211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.240 [2024-10-17 13:14:11.212226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:03.240 #34 NEW cov: 12395 ft: 15545 corp: 33/83b lim: 5 exec/s: 34 rss: 74Mb L: 3/5 MS: 1 InsertByte- 00:06:03.240 [2024-10-17 13:14:11.252144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.240 [2024-10-17 13:14:11.252175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:03.240 [2024-10-17 13:14:11.252228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.240 [2024-10-17 13:14:11.252242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:03.240 [2024-10-17 13:14:11.252296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.240 [2024-10-17 13:14:11.252310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:03.499 #35 NEW cov: 12395 ft: 15556 corp: 34/86b lim: 5 exec/s: 35 rss: 75Mb L: 3/5 MS: 1 CopyPart- 00:06:03.499 [2024-10-17 13:14:11.312024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:03.499 [2024-10-17 13:14:11.312054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:03.499 #36 NEW cov: 12395 ft: 15567 corp: 35/87b lim: 5 exec/s: 18 rss: 75Mb L: 1/5 MS: 1 CrossOver- 00:06:03.499 #36 DONE cov: 12395 ft: 15567 corp: 35/87b lim: 5 exec/s: 18 rss: 75Mb 00:06:03.499 Done 36 runs in 2 second(s) 00:06:03.499 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:06:03.499 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:03.499 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:03.499 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:06:03.499 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:06:03.499 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:03.499 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:03.499 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:03.499 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:06:03.499 13:14:11 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:03.499 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:03.499 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:06:03.499 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:06:03.499 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:03.499 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:06:03.499 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:03.499 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:03.499 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:03.499 13:14:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:06:03.499 [2024-10-17 13:14:11.499488] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:06:03.499 [2024-10-17 13:14:11.499553] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3843862 ] 00:06:03.759 [2024-10-17 13:14:11.680816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.759 [2024-10-17 13:14:11.714235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.759 [2024-10-17 13:14:11.773236] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:03.759 [2024-10-17 13:14:11.789621] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:06:03.759 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:03.759 INFO: Seed: 144391471 00:06:04.018 INFO: Loaded 1 modules (384632 inline 8-bit counters): 384632 [0x2bf274c, 0x2c505c4), 00:06:04.018 INFO: Loaded 1 PC tables (384632 PCs): 384632 [0x2c505c8,0x322ed48), 00:06:04.018 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:04.018 INFO: A corpus is not provided, starting from an empty corpus 00:06:04.018 [2024-10-17 13:14:11.844988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.018 [2024-10-17 13:14:11.845022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.018 #2 INITED cov: 12168 ft: 12166 corp: 1/1b exec/s: 0 rss: 72Mb 00:06:04.018 [2024-10-17 13:14:11.885159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.018 [2024-10-17 13:14:11.885186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.018 [2024-10-17 13:14:11.885257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.018 [2024-10-17 13:14:11.885272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.018 #3 NEW cov: 12281 ft: 13403 corp: 2/3b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 InsertByte- 00:06:04.018 [2024-10-17 13:14:11.945698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.018 [2024-10-17 13:14:11.945725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.018 [2024-10-17 13:14:11.945798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.018 [2024-10-17 13:14:11.945812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.018 [2024-10-17 13:14:11.945868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.018 [2024-10-17 13:14:11.945880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.018 [2024-10-17 13:14:11.945935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.018 [2024-10-17 13:14:11.945949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:04.018 #4 NEW cov: 12287 ft: 13928 corp: 3/7b lim: 5 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:06:04.018 [2024-10-17 13:14:11.985908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:04.018 [2024-10-17 13:14:11.985934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.018 [2024-10-17 13:14:11.986005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.018 [2024-10-17 13:14:11.986019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.018 [2024-10-17 13:14:11.986073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.018 [2024-10-17 13:14:11.986087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.018 [2024-10-17 13:14:11.986142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.018 [2024-10-17 13:14:11.986164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:04.018 [2024-10-17 13:14:11.986219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.018 [2024-10-17 13:14:11.986236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:04.018 #5 NEW cov: 12372 ft: 14278 corp: 4/12b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CopyPart- 00:06:04.018 [2024-10-17 13:14:12.045657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.018 [2024-10-17 13:14:12.045683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.018 [2024-10-17 13:14:12.045737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.018 [2024-10-17 13:14:12.045753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.278 #6 NEW cov: 12372 ft: 14381 corp: 5/14b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:04.278 [2024-10-17 13:14:12.105826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.278 [2024-10-17 13:14:12.105852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.278 [2024-10-17 13:14:12.105927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.278 [2024-10-17 13:14:12.105941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.278 #7 NEW cov: 12372 ft: 14494 corp: 6/16b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:06:04.278 [2024-10-17 13:14:12.146060] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.278 [2024-10-17 13:14:12.146085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.278 [2024-10-17 13:14:12.146167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.278 [2024-10-17 13:14:12.146182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.278 [2024-10-17 13:14:12.146240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.278 [2024-10-17 13:14:12.146253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.278 #8 NEW cov: 12372 ft: 14786 corp: 7/19b lim: 5 exec/s: 0 rss: 72Mb L: 3/5 MS: 1 InsertByte- 00:06:04.278 [2024-10-17 13:14:12.206059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.278 [2024-10-17 13:14:12.206084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.278 [2024-10-17 13:14:12.206168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.278 [2024-10-17 13:14:12.206182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.278 #9 NEW cov: 12372 ft: 14803 corp: 8/21b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:04.278 [2024-10-17 13:14:12.246164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.278 [2024-10-17 13:14:12.246193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.278 [2024-10-17 13:14:12.246265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.278 [2024-10-17 13:14:12.246279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.278 #10 NEW cov: 12372 ft: 14829 corp: 9/23b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 CopyPart- 00:06:04.278 [2024-10-17 13:14:12.286411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.278 [2024-10-17 13:14:12.286436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.278 [2024-10-17 13:14:12.286509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.278 [2024-10-17 13:14:12.286523] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.278 [2024-10-17 13:14:12.286575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.278 [2024-10-17 13:14:12.286589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.278 #11 NEW cov: 12372 ft: 14860 corp: 10/26b lim: 5 exec/s: 0 rss: 72Mb L: 3/5 MS: 1 CopyPart- 00:06:04.278 [2024-10-17 13:14:12.326553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.278 [2024-10-17 13:14:12.326579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.278 [2024-10-17 13:14:12.326636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.278 [2024-10-17 13:14:12.326650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.278 [2024-10-17 13:14:12.326707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.278 [2024-10-17 13:14:12.326720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.538 #12 NEW cov: 12372 ft: 14878 corp: 11/29b lim: 5 exec/s: 0 rss: 72Mb L: 3/5 MS: 1 InsertByte- 00:06:04.538 [2024-10-17 13:14:12.386895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.538 [2024-10-17 13:14:12.386920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.538 [2024-10-17 13:14:12.386992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.538 [2024-10-17 13:14:12.387007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.538 [2024-10-17 13:14:12.387061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.538 [2024-10-17 13:14:12.387075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.538 [2024-10-17 13:14:12.387129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.538 [2024-10-17 13:14:12.387149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:04.538 #13 NEW cov: 12372 ft: 14890 corp: 12/33b lim: 5 exec/s: 0 rss: 73Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:04.538 [2024-10-17 13:14:12.426498] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.538 [2024-10-17 13:14:12.426524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.538 #14 NEW cov: 12372 ft: 14945 corp: 13/34b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:06:04.538 [2024-10-17 13:14:12.467073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.538 [2024-10-17 13:14:12.467099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.538 [2024-10-17 13:14:12.467172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.538 [2024-10-17 13:14:12.467187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.538 [2024-10-17 13:14:12.467253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.538 [2024-10-17 13:14:12.467267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.538 [2024-10-17 13:14:12.467320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.538 [2024-10-17 13:14:12.467334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:04.538 #15 NEW cov: 12372 ft: 14967 corp: 14/38b lim: 5 exec/s: 0 rss: 73Mb L: 4/5 MS: 1 ShuffleBytes- 00:06:04.538 [2024-10-17 13:14:12.527092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.538 [2024-10-17 13:14:12.527119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.538 [2024-10-17 13:14:12.527178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.538 [2024-10-17 13:14:12.527193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.538 [2024-10-17 13:14:12.527251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.538 [2024-10-17 13:14:12.527265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.538 #16 NEW cov: 12372 ft: 14989 corp: 15/41b lim: 5 exec/s: 0 rss: 73Mb L: 3/5 MS: 1 InsertByte- 00:06:04.538 [2024-10-17 13:14:12.587140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.538 [2024-10-17 13:14:12.587172] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.538 [2024-10-17 13:14:12.587230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.538 [2024-10-17 13:14:12.587243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.798 #17 NEW cov: 12372 ft: 15018 corp: 16/43b lim: 5 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:04.798 [2024-10-17 13:14:12.627559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.798 [2024-10-17 13:14:12.627586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.798 [2024-10-17 13:14:12.627659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.798 [2024-10-17 13:14:12.627673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.798 [2024-10-17 13:14:12.627728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.798 [2024-10-17 13:14:12.627742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.798 [2024-10-17 13:14:12.627792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.798 [2024-10-17 13:14:12.627806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:04.798 #18 NEW cov: 12372 ft: 15028 corp: 17/47b lim: 5 exec/s: 0 rss: 73Mb L: 4/5 MS: 1 CrossOver- 00:06:04.798 [2024-10-17 13:14:12.667169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.798 [2024-10-17 13:14:12.667195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.798 #19 NEW cov: 12372 ft: 15086 corp: 18/48b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 EraseBytes- 00:06:04.798 [2024-10-17 13:14:12.727506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.798 [2024-10-17 13:14:12.727532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.798 [2024-10-17 13:14:12.727606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.798 [2024-10-17 13:14:12.727620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.057 NEW_FUNC[1/1]: 0x1bff788 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:05.057 #20 NEW cov: 12395 ft: 15122 corp: 19/50b lim: 5 exec/s: 20 rss: 74Mb L: 2/5 MS: 1 ChangeBit- 00:06:05.057 [2024-10-17 13:14:13.028928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.057 [2024-10-17 13:14:13.028980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.057 [2024-10-17 13:14:13.029073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.057 [2024-10-17 13:14:13.029098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.057 [2024-10-17 13:14:13.029182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.057 [2024-10-17 13:14:13.029207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.057 [2024-10-17 13:14:13.029291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.057 [2024-10-17 13:14:13.029317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:05.057 #21 NEW cov: 12395 ft: 15351 corp: 20/54b lim: 5 exec/s: 21 rss: 74Mb L: 4/5 MS: 1 CopyPart- 00:06:05.057 [2024-10-17 13:14:13.098610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.057 [2024-10-17 13:14:13.098636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.057 [2024-10-17 13:14:13.098708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.058 [2024-10-17 13:14:13.098722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.058 [2024-10-17 13:14:13.098776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.058 [2024-10-17 13:14:13.098789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.317 #22 NEW cov: 12395 ft: 15472 corp: 21/57b lim: 5 exec/s: 22 rss: 74Mb L: 3/5 MS: 1 ChangeBinInt- 00:06:05.317 [2024-10-17 13:14:13.158785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.317 [2024-10-17 13:14:13.158811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.317 [2024-10-17 13:14:13.158884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) 
qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.317 [2024-10-17 13:14:13.158898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.317 [2024-10-17 13:14:13.158953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.317 [2024-10-17 13:14:13.158967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.317 #23 NEW cov: 12395 ft: 15481 corp: 22/60b lim: 5 exec/s: 23 rss: 75Mb L: 3/5 MS: 1 ChangeBinInt- 00:06:05.317 [2024-10-17 13:14:13.218942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.317 [2024-10-17 13:14:13.218967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.317 [2024-10-17 13:14:13.219041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.317 [2024-10-17 13:14:13.219055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.317 [2024-10-17 13:14:13.219109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.317 [2024-10-17 13:14:13.219123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.317 #24 NEW cov: 12395 ft: 15534 corp: 23/63b lim: 5 exec/s: 24 rss: 75Mb L: 3/5 MS: 1 CrossOver- 00:06:05.317 [2024-10-17 13:14:13.278921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.317 [2024-10-17 13:14:13.278950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.317 [2024-10-17 13:14:13.279024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.317 [2024-10-17 13:14:13.279038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.317 #25 NEW cov: 12395 ft: 15550 corp: 24/65b lim: 5 exec/s: 25 rss: 75Mb L: 2/5 MS: 1 EraseBytes- 00:06:05.317 [2024-10-17 13:14:13.339099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.317 [2024-10-17 13:14:13.339124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.317 [2024-10-17 13:14:13.339203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.317 [2024-10-17 13:14:13.339217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.317 #26 NEW cov: 12395 ft: 15557 corp: 25/67b lim: 5 exec/s: 26 rss: 75Mb L: 2/5 MS: 1 EraseBytes- 00:06:05.576 [2024-10-17 13:14:13.379381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.576 [2024-10-17 13:14:13.379406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.576 [2024-10-17 13:14:13.379461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.576 [2024-10-17 13:14:13.379475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.576 [2024-10-17 13:14:13.379530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.576 [2024-10-17 13:14:13.379543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.576 #27 NEW cov: 12395 ft: 15564 corp: 26/70b lim: 5 exec/s: 27 rss: 75Mb L: 3/5 MS: 1 ShuffleBytes- 00:06:05.576 [2024-10-17 13:14:13.419357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.576 [2024-10-17 13:14:13.419382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.576 [2024-10-17 13:14:13.419454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.576 [2024-10-17 13:14:13.419468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.576 #28 NEW cov: 12395 ft: 15575 corp: 27/72b lim: 5 exec/s: 28 rss: 75Mb L: 2/5 MS: 1 CopyPart- 00:06:05.576 [2024-10-17 13:14:13.479856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.576 [2024-10-17 13:14:13.479881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.576 [2024-10-17 13:14:13.479954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.576 [2024-10-17 13:14:13.479968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.576 [2024-10-17 13:14:13.480024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.576 [2024-10-17 13:14:13.480041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.576 [2024-10-17 13:14:13.480097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 
cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.576 [2024-10-17 13:14:13.480111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:05.576 #29 NEW cov: 12395 ft: 15597 corp: 28/76b lim: 5 exec/s: 29 rss: 75Mb L: 4/5 MS: 1 ChangeByte- 00:06:05.576 [2024-10-17 13:14:13.539835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.576 [2024-10-17 13:14:13.539860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.576 [2024-10-17 13:14:13.539933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.576 [2024-10-17 13:14:13.539946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.577 [2024-10-17 13:14:13.540001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.577 [2024-10-17 13:14:13.540015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.577 #30 NEW cov: 12395 ft: 15603 corp: 29/79b lim: 5 exec/s: 30 rss: 75Mb L: 3/5 MS: 1 ChangeBit- 00:06:05.577 [2024-10-17 13:14:13.600006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.577 [2024-10-17 13:14:13.600031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.577 [2024-10-17 13:14:13.600103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.577 [2024-10-17 13:14:13.600117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.577 [2024-10-17 13:14:13.600179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.577 [2024-10-17 13:14:13.600193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.836 #31 NEW cov: 12395 ft: 15608 corp: 30/82b lim: 5 exec/s: 31 rss: 75Mb L: 3/5 MS: 1 CrossOver- 00:06:05.836 [2024-10-17 13:14:13.660361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.836 [2024-10-17 13:14:13.660387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.836 [2024-10-17 13:14:13.660445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.836 [2024-10-17 13:14:13.660459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:06:05.836 [2024-10-17 13:14:13.660514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.836 [2024-10-17 13:14:13.660529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.836 [2024-10-17 13:14:13.660588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.836 [2024-10-17 13:14:13.660602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:05.836 #32 NEW cov: 12395 ft: 15613 corp: 31/86b lim: 5 exec/s: 32 rss: 75Mb L: 4/5 MS: 1 InsertByte- 00:06:05.836 [2024-10-17 13:14:13.700186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.836 [2024-10-17 13:14:13.700212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.837 [2024-10-17 13:14:13.700283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.837 [2024-10-17 13:14:13.700297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.837 #33 NEW cov: 12395 ft: 15651 corp: 32/88b lim: 5 exec/s: 33 rss: 75Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:05.837 [2024-10-17 13:14:13.760166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.837 [2024-10-17 13:14:13.760191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.837 #34 NEW cov: 12395 ft: 15665 corp: 33/89b lim: 5 exec/s: 34 rss: 75Mb L: 1/5 MS: 1 EraseBytes- 00:06:05.837 [2024-10-17 13:14:13.800559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.837 [2024-10-17 13:14:13.800584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.837 [2024-10-17 13:14:13.800657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.837 [2024-10-17 13:14:13.800672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.837 [2024-10-17 13:14:13.800729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.837 [2024-10-17 13:14:13.800743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.837 #35 NEW cov: 12395 ft: 15700 corp: 34/92b lim: 5 exec/s: 17 rss: 75Mb L: 3/5 MS: 1 ChangeBit- 00:06:05.837 #35 DONE cov: 12395 ft: 15700 corp: 34/92b lim: 5 exec/s: 17 
rss: 75Mb 00:06:05.837 Done 35 runs in 2 second(s) 00:06:06.097 13:14:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:06:06.097 13:14:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:06.097 13:14:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:06.097 13:14:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:06:06.097 13:14:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:06:06.097 13:14:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:06.097 13:14:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:06.097 13:14:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:06.097 13:14:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:06:06.097 13:14:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:06.097 13:14:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:06.097 13:14:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:06:06.097 13:14:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:06:06.097 13:14:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:06.097 13:14:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:06:06.097 13:14:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:06.097 13:14:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:06.097 13:14:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:06.097 13:14:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:06:06.097 [2024-10-17 13:14:13.990330] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
00:06:06.097 [2024-10-17 13:14:13.990403] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3844151 ] 00:06:06.357 [2024-10-17 13:14:14.185257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.357 [2024-10-17 13:14:14.220179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.357 [2024-10-17 13:14:14.279218] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:06.357 [2024-10-17 13:14:14.295653] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:06:06.357 INFO: Running with entropic power schedule (0xFF, 100). 00:06:06.357 INFO: Seed: 2648384661 00:06:06.357 INFO: Loaded 1 modules (384632 inline 8-bit counters): 384632 [0x2bf274c, 0x2c505c4), 00:06:06.357 INFO: Loaded 1 PC tables (384632 PCs): 384632 [0x2c505c8,0x322ed48), 00:06:06.357 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:06.357 INFO: A corpus is not provided, starting from an empty corpus 00:06:06.357 #2 INITED exec/s: 0 rss: 65Mb 00:06:06.357 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:06.357 This may also happen if the target rejected all inputs we tried so far 00:06:06.357 [2024-10-17 13:14:14.365622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d8e5e5e5 cdw11:e5e5e5e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.357 [2024-10-17 13:14:14.365661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.875 NEW_FUNC[1/714]: 0x448a88 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:06:06.875 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:06.875 #22 NEW cov: 12189 ft: 12191 corp: 2/10b lim: 40 exec/s: 0 rss: 73Mb L: 9/9 MS: 5 CopyPart-ChangeBinInt-ChangeByte-ChangeBit-InsertRepeatedBytes- 00:06:06.875 [2024-10-17 13:14:14.706436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d8e5e5e5 cdw11:e527e5e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.875 [2024-10-17 13:14:14.706479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.875 #23 NEW cov: 12304 ft: 12669 corp: 3/19b lim: 40 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 ChangeByte- 00:06:06.875 [2024-10-17 13:14:14.776638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d8e5e500 cdw11:e5e5e5e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.875 [2024-10-17 13:14:14.776669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.875 #24 NEW cov: 12310 ft: 13023 corp: 4/28b lim: 40 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 ChangeByte- 00:06:06.875 [2024-10-17 13:14:14.826699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d8e5e5e5 cdw11:00e5e5e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.875 [2024-10-17 13:14:14.826726] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.875 #25 NEW cov: 12395 ft: 13283 corp: 5/37b lim: 40 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 ShuffleBytes- 00:06:06.875 [2024-10-17 13:14:14.896938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d8e5e5e5 cdw11:00e5e5e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.875 [2024-10-17 13:14:14.896964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.135 #26 NEW cov: 12395 ft: 13427 corp: 6/48b lim: 40 exec/s: 0 rss: 73Mb L: 11/11 MS: 1 CopyPart- 00:06:07.135 [2024-10-17 13:14:14.967096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d8e50f00 cdw11:e527e5e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.135 [2024-10-17 13:14:14.967123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.135 #27 NEW cov: 12395 ft: 13505 corp: 7/57b lim: 40 exec/s: 0 rss: 73Mb L: 9/11 MS: 1 CMP- DE: "\017\000"- 00:06:07.135 [2024-10-17 13:14:15.037439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d8e5e5e5 cdw11:000f00e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.135 [2024-10-17 13:14:15.037466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.135 #28 NEW cov: 12395 ft: 13584 corp: 8/72b lim: 40 exec/s: 0 rss: 73Mb L: 15/15 MS: 1 CrossOver- 00:06:07.135 [2024-10-17 13:14:15.107896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d8e5e5e5 cdw11:00e5e5e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.135 [2024-10-17 13:14:15.107925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.135 [2024-10-17 13:14:15.108062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e5e5e5e5 cdw11:00e5e5e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.135 [2024-10-17 13:14:15.108082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.135 #29 NEW cov: 12395 ft: 14017 corp: 9/88b lim: 40 exec/s: 0 rss: 73Mb L: 16/16 MS: 1 CrossOver- 00:06:07.135 [2024-10-17 13:14:15.157792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d8e50f00 cdw11:e527f5e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.135 [2024-10-17 13:14:15.157820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.394 #30 NEW cov: 12395 ft: 14068 corp: 10/97b lim: 40 exec/s: 0 rss: 74Mb L: 9/16 MS: 1 ChangeBit- 00:06:07.394 [2024-10-17 13:14:15.227992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d8e5e5e5 cdw11:000f00e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.394 [2024-10-17 13:14:15.228020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.394 NEW_FUNC[1/1]: 0x1bff788 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:07.395 
#36 NEW cov: 12418 ft: 14178 corp: 11/110b lim: 40 exec/s: 0 rss: 74Mb L: 13/16 MS: 1 PersAutoDict- DE: "\017\000"- 00:06:07.395 [2024-10-17 13:14:15.278107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d8e5e5e5 cdw11:00e57de5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.395 [2024-10-17 13:14:15.278135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.395 #37 NEW cov: 12418 ft: 14216 corp: 12/122b lim: 40 exec/s: 0 rss: 74Mb L: 12/16 MS: 1 InsertByte- 00:06:07.395 [2024-10-17 13:14:15.328526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d8e5e560 cdw11:00e5e5e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.395 [2024-10-17 13:14:15.328553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.395 [2024-10-17 13:14:15.328695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e5e5e5e5 cdw11:00e5e5e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.395 [2024-10-17 13:14:15.328715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.395 #48 NEW cov: 12418 ft: 14230 corp: 13/138b lim: 40 exec/s: 48 rss: 74Mb L: 16/16 MS: 1 ChangeByte- 00:06:07.395 [2024-10-17 13:14:15.398691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d8e5e5e5 cdw11:00e5e5e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.395 [2024-10-17 13:14:15.398718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.395 [2024-10-17 13:14:15.398850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e5e57ee5 cdw11:00e5e5e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.395 [2024-10-17 13:14:15.398867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.395 #49 NEW cov: 12418 ft: 14260 corp: 14/154b lim: 40 exec/s: 49 rss: 74Mb L: 16/16 MS: 1 ChangeByte- 00:06:07.654 [2024-10-17 13:14:15.448674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:05e5e505 cdw11:e5e500e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.654 [2024-10-17 13:14:15.448704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.654 #52 NEW cov: 12418 ft: 14306 corp: 15/162b lim: 40 exec/s: 52 rss: 74Mb L: 8/16 MS: 3 EraseBytes-ChangeBinInt-CopyPart- 00:06:07.654 [2024-10-17 13:14:15.498789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d8e5e5e5 cdw11:0027e5e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.654 [2024-10-17 13:14:15.498815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.654 #53 NEW cov: 12418 ft: 14333 corp: 16/171b lim: 40 exec/s: 53 rss: 74Mb L: 9/16 MS: 1 ChangeByte- 00:06:07.654 [2024-10-17 13:14:15.548923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d8e5e50f cdw11:00e5e5e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.654 
[2024-10-17 13:14:15.548953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.654 #54 NEW cov: 12418 ft: 14364 corp: 17/180b lim: 40 exec/s: 54 rss: 74Mb L: 9/16 MS: 1 PersAutoDict- DE: "\017\000"- 00:06:07.654 [2024-10-17 13:14:15.599305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d8e5e560 cdw11:00e5e5e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.654 [2024-10-17 13:14:15.599335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.654 [2024-10-17 13:14:15.599488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e5e5e5e5 cdw11:00e5e5e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.654 [2024-10-17 13:14:15.599506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.654 #55 NEW cov: 12418 ft: 14380 corp: 18/196b lim: 40 exec/s: 55 rss: 74Mb L: 16/16 MS: 1 ShuffleBytes- 00:06:07.654 [2024-10-17 13:14:15.669297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d8e5e5e5 cdw11:000f00e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.654 [2024-10-17 13:14:15.669329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.914 #56 NEW cov: 12418 ft: 14391 corp: 19/208b lim: 40 exec/s: 56 rss: 74Mb L: 12/16 MS: 1 EraseBytes- 00:06:07.914 [2024-10-17 13:14:15.739765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d8e5e560 cdw11:00ee0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.914 [2024-10-17 13:14:15.739794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.914 [2024-10-17 13:14:15.739934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00e5e5e5 cdw11:00e5e5e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.914 [2024-10-17 13:14:15.739954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.914 #57 NEW cov: 12418 ft: 14419 corp: 20/224b lim: 40 exec/s: 57 rss: 74Mb L: 16/16 MS: 1 CMP- DE: "\356\000\000\000"- 00:06:07.914 [2024-10-17 13:14:15.809995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d8e5e560 cdw11:00ee0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.914 [2024-10-17 13:14:15.810023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.914 [2024-10-17 13:14:15.810156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00e5e5e5 cdw11:00e50100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.914 [2024-10-17 13:14:15.810174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.914 #58 NEW cov: 12418 ft: 14437 corp: 21/244b lim: 40 exec/s: 58 rss: 74Mb L: 20/20 MS: 1 CMP- DE: "\001\000\000\037"- 00:06:07.914 [2024-10-17 13:14:15.880029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d80f00e5 
cdw11:27f5e5e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.914 [2024-10-17 13:14:15.880057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.914 #59 NEW cov: 12418 ft: 14447 corp: 22/252b lim: 40 exec/s: 59 rss: 74Mb L: 8/20 MS: 1 EraseBytes- 00:06:07.914 [2024-10-17 13:14:15.950641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d8e5e500 cdw11:e5e5e5e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.914 [2024-10-17 13:14:15.950669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.914 [2024-10-17 13:14:15.950820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d8e5e560 cdw11:00ee0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.914 [2024-10-17 13:14:15.950838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.914 [2024-10-17 13:14:15.950978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00e5e5e5 cdw11:00e50100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.914 [2024-10-17 13:14:15.950997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.174 #60 NEW cov: 12418 ft: 14684 corp: 23/278b lim: 40 exec/s: 60 rss: 74Mb L: 26/26 MS: 1 CrossOver- 00:06:08.174 [2024-10-17 13:14:16.000253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d8e5e5e5 cdw11:000fe527 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.174 [2024-10-17 13:14:16.000282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.174 #65 NEW cov: 12418 ft: 14689 corp: 24/287b lim: 40 exec/s: 65 rss: 74Mb L: 9/26 MS: 5 EraseBytes-InsertByte-EraseBytes-EraseBytes-CrossOver- 00:06:08.174 [2024-10-17 13:14:16.050444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d8e5e5e5 cdw11:000f00e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.174 [2024-10-17 13:14:16.050473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.174 #66 NEW cov: 12418 ft: 14711 corp: 25/302b lim: 40 exec/s: 66 rss: 74Mb L: 15/26 MS: 1 ChangeBinInt- 00:06:08.174 [2024-10-17 13:14:16.120779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d8e5e5e5 cdw11:0fe527e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.174 [2024-10-17 13:14:16.120806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.174 #67 NEW cov: 12418 ft: 14742 corp: 26/310b lim: 40 exec/s: 67 rss: 74Mb L: 8/26 MS: 1 EraseBytes- 00:06:08.174 [2024-10-17 13:14:16.190858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d8e5e522 cdw11:0fe527e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.174 [2024-10-17 13:14:16.190884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.434 #68 NEW cov: 12418 ft: 14828 corp: 27/318b lim: 40 exec/s: 68 rss: 75Mb L: 8/26 MS: 1 
ChangeBinInt- 00:06:08.434 [2024-10-17 13:14:16.261548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d8e5e500 cdw11:e5e5e5e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.434 [2024-10-17 13:14:16.261576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.434 [2024-10-17 13:14:16.261711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d8e5e560 cdw11:00ee0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.434 [2024-10-17 13:14:16.261732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.434 [2024-10-17 13:14:16.261867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00e5e5e5 cdw11:00e5012d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.434 [2024-10-17 13:14:16.261886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.434 #69 NEW cov: 12418 ft: 14831 corp: 28/344b lim: 40 exec/s: 69 rss: 75Mb L: 26/26 MS: 1 ChangeByte- 00:06:08.434 [2024-10-17 13:14:16.331335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d8e5e500 cdw11:e5e527e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.434 [2024-10-17 13:14:16.331364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.434 #70 NEW cov: 12418 ft: 14841 corp: 29/355b lim: 40 exec/s: 35 rss: 75Mb L: 11/26 MS: 1 CrossOver- 00:06:08.434 #70 DONE cov: 12418 ft: 14841 corp: 29/355b lim: 40 exec/s: 35 rss: 75Mb 00:06:08.434 ###### Recommended dictionary. ###### 00:06:08.434 "\017\000" # Uses: 2 00:06:08.434 "\356\000\000\000" # Uses: 0 00:06:08.434 "\001\000\000\037" # Uses: 0 00:06:08.434 ###### End of recommended dictionary. 
###### 00:06:08.434 Done 70 runs in 2 second(s) 00:06:08.434 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:06:08.434 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:08.434 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:08.434 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:06:08.434 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:06:08.434 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:08.434 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:08.434 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:08.434 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:06:08.434 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:08.434 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:08.434 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:06:08.434 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:06:08.434 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:08.434 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:06:08.434 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:08.434 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:08.434 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:08.434 13:14:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:06:08.695 [2024-10-17 13:14:16.499953] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
00:06:08.695 [2024-10-17 13:14:16.500031] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3844689 ] 00:06:08.695 [2024-10-17 13:14:16.678458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.695 [2024-10-17 13:14:16.711690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.954 [2024-10-17 13:14:16.770754] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:08.954 [2024-10-17 13:14:16.787053] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:06:08.954 INFO: Running with entropic power schedule (0xFF, 100). 00:06:08.954 INFO: Seed: 844428364 00:06:08.954 INFO: Loaded 1 modules (384632 inline 8-bit counters): 384632 [0x2bf274c, 0x2c505c4), 00:06:08.954 INFO: Loaded 1 PC tables (384632 PCs): 384632 [0x2c505c8,0x322ed48), 00:06:08.954 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:08.954 INFO: A corpus is not provided, starting from an empty corpus 00:06:08.954 #2 INITED exec/s: 0 rss: 65Mb 00:06:08.954 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:08.954 This may also happen if the target rejected all inputs we tried so far 00:06:08.954 [2024-10-17 13:14:16.835793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.954 [2024-10-17 13:14:16.835821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.214 NEW_FUNC[1/715]: 0x44a7f8 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:06:09.214 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:09.214 #4 NEW cov: 12203 ft: 12201 corp: 2/12b lim: 40 exec/s: 0 rss: 73Mb L: 11/11 MS: 2 ChangeByte-InsertRepeatedBytes- 00:06:09.214 [2024-10-17 13:14:17.166780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.214 [2024-10-17 13:14:17.166817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.214 #5 NEW cov: 12316 ft: 12856 corp: 3/23b lim: 40 exec/s: 0 rss: 73Mb L: 11/11 MS: 1 CopyPart- 00:06:09.214 [2024-10-17 13:14:17.226840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000a00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.214 [2024-10-17 13:14:17.226867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.214 #6 NEW cov: 12322 ft: 13140 corp: 4/35b lim: 40 exec/s: 0 rss: 73Mb L: 12/12 MS: 1 CrossOver- 00:06:09.472 [2024-10-17 13:14:17.266910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a222222 cdw11:a2220077 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.472 [2024-10-17 13:14:17.266937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.472 #11 NEW cov: 12407 ft: 13370 corp: 5/46b lim: 40 exec/s: 0 rss: 73Mb L: 11/12 MS: 5 InsertRepeatedBytes-ShuffleBytes-ChangeBit-ChangeByte-InsertRepeatedBytes- 00:06:09.472 [2024-10-17 13:14:17.307045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.472 [2024-10-17 13:14:17.307071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.472 #12 NEW cov: 12407 ft: 13559 corp: 6/57b lim: 40 exec/s: 0 rss: 73Mb L: 11/12 MS: 1 ShuffleBytes- 00:06:09.472 [2024-10-17 13:14:17.347323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.472 [2024-10-17 13:14:17.347350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.472 [2024-10-17 13:14:17.347415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.472 [2024-10-17 13:14:17.347429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.472 #13 NEW cov: 12407 ft: 14296 corp: 7/76b lim: 40 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:06:09.472 [2024-10-17 13:14:17.387266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.472 [2024-10-17 13:14:17.387293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.472 #16 NEW cov: 12407 ft: 14336 corp: 8/86b lim: 40 exec/s: 0 rss: 73Mb L: 10/19 MS: 3 ChangeBit-CopyPart-CrossOver- 00:06:09.472 [2024-10-17 13:14:17.427532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.472 [2024-10-17 13:14:17.427558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.472 [2024-10-17 13:14:17.427622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.472 [2024-10-17 13:14:17.427637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.472 #17 NEW cov: 12407 ft: 14438 corp: 9/104b lim: 40 exec/s: 0 rss: 73Mb L: 18/19 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\000"- 00:06:09.472 [2024-10-17 13:14:17.487551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.472 [2024-10-17 13:14:17.487576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.472 #18 NEW cov: 12407 ft: 14551 corp: 10/114b lim: 40 exec/s: 0 rss: 73Mb L: 10/19 MS: 1 ChangeByte- 00:06:09.732 [2024-10-17 13:14:17.528025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:11111111 cdw11:11111111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.732 [2024-10-17 13:14:17.528054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.732 [2024-10-17 13:14:17.528123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.732 [2024-10-17 13:14:17.528137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.732 [2024-10-17 13:14:17.528210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:11111111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.732 [2024-10-17 13:14:17.528225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.732 #19 NEW cov: 12407 ft: 14881 corp: 11/145b lim: 40 exec/s: 0 rss: 73Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:06:09.732 [2024-10-17 13:14:17.567736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.732 [2024-10-17 13:14:17.567762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.732 #20 NEW cov: 12407 ft: 14894 corp: 12/157b lim: 40 exec/s: 0 rss: 73Mb L: 12/31 MS: 1 CrossOver- 00:06:09.732 [2024-10-17 13:14:17.628132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:fffffffb cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.732 [2024-10-17 13:14:17.628161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.732 [2024-10-17 13:14:17.628224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.732 [2024-10-17 13:14:17.628241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.732 #21 NEW cov: 12407 ft: 14957 corp: 13/175b lim: 40 exec/s: 0 rss: 73Mb L: 18/31 MS: 1 ChangeBit- 00:06:09.732 [2024-10-17 13:14:17.688318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.732 [2024-10-17 13:14:17.688343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.732 [2024-10-17 13:14:17.688420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.732 [2024-10-17 13:14:17.688435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.732 NEW_FUNC[1/1]: 0x1bff788 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:09.732 #22 NEW cov: 12430 ft: 15009 corp: 14/191b lim: 40 exec/s: 0 rss: 74Mb L: 16/31 MS: 1 EraseBytes- 00:06:09.732 [2024-10-17 13:14:17.748880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.732 [2024-10-17 13:14:17.748906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.732 [2024-10-17 13:14:17.748966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.732 [2024-10-17 13:14:17.748980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.732 [2024-10-17 13:14:17.749041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ff111111 cdw11:11111111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.732 [2024-10-17 13:14:17.749056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.732 [2024-10-17 13:14:17.749118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:11111111 cdw11:11111111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.732 [2024-10-17 13:14:17.749132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.732 #23 NEW cov: 12430 ft: 15364 corp: 15/225b lim: 40 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 CrossOver- 00:06:09.991 [2024-10-17 13:14:17.788747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:11111111 cdw11:11111111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.991 [2024-10-17 13:14:17.788772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.991 [2024-10-17 13:14:17.788831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:01111111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.991 [2024-10-17 13:14:17.788845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.991 [2024-10-17 13:14:17.788907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:11111111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.991 [2024-10-17 13:14:17.788921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.991 #24 NEW cov: 12430 ft: 15371 corp: 16/256b lim: 40 exec/s: 24 rss: 74Mb L: 31/34 MS: 1 ChangeBit- 00:06:09.992 [2024-10-17 13:14:17.848817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0801 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.992 [2024-10-17 13:14:17.848843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.992 [2024-10-17 13:14:17.848903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.992 [2024-10-17 13:14:17.848918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.992 #25 NEW cov: 12430 ft: 15404 corp: 17/274b lim: 40 exec/s: 25 rss: 74Mb L: 18/34 
MS: 1 ChangeBinInt- 00:06:09.992 [2024-10-17 13:14:17.889066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.992 [2024-10-17 13:14:17.889091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.992 [2024-10-17 13:14:17.889157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:11111111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.992 [2024-10-17 13:14:17.889172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.992 [2024-10-17 13:14:17.889236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff111111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.992 [2024-10-17 13:14:17.889251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.992 #26 NEW cov: 12430 ft: 15438 corp: 18/305b lim: 40 exec/s: 26 rss: 74Mb L: 31/34 MS: 1 CrossOver- 00:06:09.992 [2024-10-17 13:14:17.928811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.992 [2024-10-17 13:14:17.928836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.992 #27 NEW cov: 12430 ft: 15471 corp: 19/319b lim: 40 exec/s: 27 rss: 74Mb L: 14/34 MS: 1 CrossOver- 00:06:09.992 [2024-10-17 13:14:17.969292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.992 [2024-10-17 13:14:17.969323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.992 [2024-10-17 13:14:17.969387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:11111111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.992 [2024-10-17 13:14:17.969418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.992 [2024-10-17 13:14:17.969481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff111111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.992 [2024-10-17 13:14:17.969496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.992 #28 NEW cov: 12430 ft: 15513 corp: 20/350b lim: 40 exec/s: 28 rss: 74Mb L: 31/34 MS: 1 CopyPart- 00:06:09.992 [2024-10-17 13:14:18.029434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.992 [2024-10-17 13:14:18.029460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.992 [2024-10-17 13:14:18.029526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:11511111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.992 [2024-10-17 
13:14:18.029541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.992 [2024-10-17 13:14:18.029603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff111111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:09.992 [2024-10-17 13:14:18.029616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.252 #29 NEW cov: 12430 ft: 15564 corp: 21/381b lim: 40 exec/s: 29 rss: 74Mb L: 31/34 MS: 1 ChangeBit- 00:06:10.252 [2024-10-17 13:14:18.089639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.252 [2024-10-17 13:14:18.089664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.252 [2024-10-17 13:14:18.089729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:11111111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.252 [2024-10-17 13:14:18.089743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.252 [2024-10-17 13:14:18.089805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.252 [2024-10-17 13:14:18.089820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.252 #30 NEW cov: 12430 ft: 15596 corp: 22/412b lim: 40 exec/s: 30 rss: 74Mb L: 31/34 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\000"- 00:06:10.252 [2024-10-17 13:14:18.129584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff08 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.252 [2024-10-17 13:14:18.129610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.252 [2024-10-17 13:14:18.129673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:01000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.252 [2024-10-17 13:14:18.129687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.252 #31 NEW cov: 12430 ft: 15627 corp: 23/430b lim: 40 exec/s: 31 rss: 74Mb L: 18/34 MS: 1 CopyPart- 00:06:10.252 [2024-10-17 13:14:18.189735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000011 cdw11:111111ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.252 [2024-10-17 13:14:18.189761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.252 [2024-10-17 13:14:18.189822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:11111111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.252 [2024-10-17 13:14:18.189836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.252 #32 NEW cov: 12430 ft: 15663 corp: 24/452b lim: 40 exec/s: 
32 rss: 74Mb L: 22/34 MS: 1 EraseBytes- 00:06:10.252 [2024-10-17 13:14:18.229979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.252 [2024-10-17 13:14:18.230004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.252 [2024-10-17 13:14:18.230082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.252 [2024-10-17 13:14:18.230097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.252 [2024-10-17 13:14:18.230164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.252 [2024-10-17 13:14:18.230179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.252 #33 NEW cov: 12430 ft: 15693 corp: 25/480b lim: 40 exec/s: 33 rss: 74Mb L: 28/34 MS: 1 InsertRepeatedBytes- 00:06:10.252 [2024-10-17 13:14:18.270131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.252 [2024-10-17 13:14:18.270162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.252 [2024-10-17 13:14:18.270228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000a00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.252 [2024-10-17 13:14:18.270245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.252 [2024-10-17 13:14:18.270312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00002828 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.252 [2024-10-17 13:14:18.270326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.512 #34 NEW cov: 12430 ft: 15703 corp: 26/504b lim: 40 exec/s: 34 rss: 74Mb L: 24/34 MS: 1 CopyPart- 00:06:10.512 [2024-10-17 13:14:18.329920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a222222 cdw11:a2220077 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.512 [2024-10-17 13:14:18.329946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.512 #35 NEW cov: 12430 ft: 15747 corp: 27/514b lim: 40 exec/s: 35 rss: 74Mb L: 10/34 MS: 1 EraseBytes- 00:06:10.512 [2024-10-17 13:14:18.390314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:11111111 cdw11:11111111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.512 [2024-10-17 13:14:18.390339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.512 [2024-10-17 13:14:18.390400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:10.512 [2024-10-17 13:14:18.390418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.512 #36 NEW cov: 12430 ft: 15771 corp: 28/537b lim: 40 exec/s: 36 rss: 74Mb L: 23/34 MS: 1 EraseBytes- 00:06:10.512 [2024-10-17 13:14:18.430597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.512 [2024-10-17 13:14:18.430622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.512 [2024-10-17 13:14:18.430686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:11111111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.512 [2024-10-17 13:14:18.430700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.512 [2024-10-17 13:14:18.430762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff111119 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.512 [2024-10-17 13:14:18.430776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.512 #37 NEW cov: 12430 ft: 15789 corp: 29/568b lim: 40 exec/s: 37 rss: 74Mb L: 31/34 MS: 1 ChangeBit- 00:06:10.512 [2024-10-17 13:14:18.470838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.512 [2024-10-17 13:14:18.470863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.512 [2024-10-17 13:14:18.470925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.512 [2024-10-17 13:14:18.470940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.512 [2024-10-17 13:14:18.471000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ff111111 cdw11:15111111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.512 [2024-10-17 13:14:18.471014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.512 [2024-10-17 13:14:18.471077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:11111111 cdw11:11111111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.512 [2024-10-17 13:14:18.471091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:10.512 #38 NEW cov: 12430 ft: 15811 corp: 30/602b lim: 40 exec/s: 38 rss: 74Mb L: 34/34 MS: 1 ChangeBit- 00:06:10.512 [2024-10-17 13:14:18.530851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.512 [2024-10-17 13:14:18.530877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.512 [2024-10-17 13:14:18.530939] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000a00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.512 [2024-10-17 13:14:18.530953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.512 [2024-10-17 13:14:18.531014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:000028d0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.512 [2024-10-17 13:14:18.531028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.772 #39 NEW cov: 12430 ft: 15876 corp: 31/626b lim: 40 exec/s: 39 rss: 75Mb L: 24/34 MS: 1 ChangeByte- 00:06:10.772 [2024-10-17 13:14:18.591254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.772 [2024-10-17 13:14:18.591284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.772 [2024-10-17 13:14:18.591349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00ffff00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.772 [2024-10-17 13:14:18.591363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.772 [2024-10-17 13:14:18.591424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ff115111 cdw11:11ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.772 [2024-10-17 13:14:18.591438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.772 [2024-10-17 13:14:18.591499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffff1111 cdw11:11111111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.772 [2024-10-17 13:14:18.591513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:10.772 #40 NEW cov: 12430 ft: 15902 corp: 32/662b lim: 40 exec/s: 40 rss: 75Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:06:10.772 [2024-10-17 13:14:18.651093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.772 [2024-10-17 13:14:18.651121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.772 [2024-10-17 13:14:18.651183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.772 [2024-10-17 13:14:18.651199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.772 #41 NEW cov: 12430 ft: 15916 corp: 33/685b lim: 40 exec/s: 41 rss: 75Mb L: 23/36 MS: 1 CopyPart- 00:06:10.772 [2024-10-17 13:14:18.711428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:11111111 cdw11:11111111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.772 [2024-10-17 13:14:18.711455] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.772 [2024-10-17 13:14:18.711519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:11111110 cdw11:11111111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.772 [2024-10-17 13:14:18.711535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.772 [2024-10-17 13:14:18.711597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:11111111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.772 [2024-10-17 13:14:18.711612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.772 #42 NEW cov: 12430 ft: 15958 corp: 34/716b lim: 40 exec/s: 42 rss: 75Mb L: 31/36 MS: 1 ChangeBit- 00:06:10.772 [2024-10-17 13:14:18.751685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.772 [2024-10-17 13:14:18.751711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.772 [2024-10-17 13:14:18.751774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.772 [2024-10-17 13:14:18.751791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.772 [2024-10-17 13:14:18.751851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00490000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.772 [2024-10-17 13:14:18.751869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.772 [2024-10-17 13:14:18.751929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.772 [2024-10-17 13:14:18.751942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:10.772 #43 NEW cov: 12430 ft: 15972 corp: 35/752b lim: 40 exec/s: 43 rss: 75Mb L: 36/36 MS: 1 CopyPart- 00:06:10.772 [2024-10-17 13:14:18.811513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.772 [2024-10-17 13:14:18.811539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.772 [2024-10-17 13:14:18.811602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff2d0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.772 [2024-10-17 13:14:18.811617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.032 #44 NEW cov: 12430 ft: 15990 corp: 36/768b lim: 40 exec/s: 22 rss: 75Mb L: 16/36 MS: 1 ChangeByte- 00:06:11.032 #44 DONE cov: 12430 ft: 15990 corp: 36/768b lim: 40 exec/s: 22 rss: 75Mb 00:06:11.032 ###### Recommended dictionary. 
###### 00:06:11.032 "\377\377\377\377\377\377\377\000" # Uses: 1 00:06:11.032 ###### End of recommended dictionary. ###### 00:06:11.032 Done 44 runs in 2 second(s) 00:06:11.032 13:14:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:06:11.032 13:14:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:11.032 13:14:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:11.032 13:14:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:06:11.032 13:14:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:06:11.032 13:14:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:11.032 13:14:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:11.032 13:14:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:11.032 13:14:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:06:11.032 13:14:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:11.032 13:14:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:11.032 13:14:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:06:11.032 13:14:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:06:11.032 13:14:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:11.032 13:14:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:06:11.032 13:14:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:11.032 13:14:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:11.032 13:14:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:11.032 13:14:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:06:11.032 [2024-10-17 13:14:19.004587] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
00:06:11.032 [2024-10-17 13:14:19.004658] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3845127 ] 00:06:11.291 [2024-10-17 13:14:19.188612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.291 [2024-10-17 13:14:19.222301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.291 [2024-10-17 13:14:19.281295] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:11.291 [2024-10-17 13:14:19.297682] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:06:11.291 INFO: Running with entropic power schedule (0xFF, 100). 00:06:11.291 INFO: Seed: 3355414269 00:06:11.291 INFO: Loaded 1 modules (384632 inline 8-bit counters): 384632 [0x2bf274c, 0x2c505c4), 00:06:11.291 INFO: Loaded 1 PC tables (384632 PCs): 384632 [0x2c505c8,0x322ed48), 00:06:11.291 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:11.291 INFO: A corpus is not provided, starting from an empty corpus 00:06:11.291 #2 INITED exec/s: 0 rss: 65Mb 00:06:11.291 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:11.291 This may also happen if the target rejected all inputs we tried so far 00:06:11.550 [2024-10-17 13:14:19.347272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:1b1b1b1b cdw11:1b1b1b1b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.550 [2024-10-17 13:14:19.347301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.550 [2024-10-17 13:14:19.347357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:1b1b1b1b cdw11:1b1b1b1b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.550 [2024-10-17 13:14:19.347371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.550 [2024-10-17 13:14:19.347424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1b1b1b1b cdw11:1b1b1b1b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.550 [2024-10-17 13:14:19.347438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.550 [2024-10-17 13:14:19.347491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:1b1b1b1b cdw11:1b1b1b1b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.550 [2024-10-17 13:14:19.347505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:11.810 NEW_FUNC[1/715]: 0x44c568 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:06:11.810 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:11.810 #15 NEW cov: 12201 ft: 12199 corp: 2/39b lim: 40 exec/s: 0 rss: 73Mb L: 38/38 MS: 3 ChangeByte-ChangeByte-InsertRepeatedBytes- 00:06:11.810 [2024-10-17 13:14:19.668062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 
cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.810 [2024-10-17 13:14:19.668093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.810 [2024-10-17 13:14:19.668157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.810 [2024-10-17 13:14:19.668171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.810 [2024-10-17 13:14:19.668242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.810 [2024-10-17 13:14:19.668258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.810 [2024-10-17 13:14:19.668322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.810 [2024-10-17 13:14:19.668336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:11.810 #16 NEW cov: 12314 ft: 12840 corp: 3/78b lim: 40 exec/s: 0 rss: 73Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:06:11.810 [2024-10-17 13:14:19.707573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:000a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.810 [2024-10-17 13:14:19.707599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.810 #25 NEW cov: 12320 ft: 13817 corp: 4/86b lim: 40 exec/s: 0 rss: 73Mb L: 8/39 MS: 4 InsertRepeatedBytes-ShuffleBytes-ShuffleBytes-CopyPart- 00:06:11.810 [2024-10-17 13:14:19.748156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:6e000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.810 [2024-10-17 13:14:19.748182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.810 [2024-10-17 13:14:19.748237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.810 [2024-10-17 13:14:19.748252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.810 [2024-10-17 13:14:19.748307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.810 [2024-10-17 13:14:19.748321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.810 [2024-10-17 13:14:19.748375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.810 [2024-10-17 13:14:19.748389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:11.810 #30 NEW cov: 12405 ft: 14069 corp: 5/120b lim: 40 
exec/s: 0 rss: 73Mb L: 34/39 MS: 5 ChangeByte-InsertByte-ChangeByte-EraseBytes-InsertRepeatedBytes- 00:06:11.810 [2024-10-17 13:14:19.788263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:1b1b1b1b cdw11:1b1b1b1b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.810 [2024-10-17 13:14:19.788288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.810 [2024-10-17 13:14:19.788346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:1b1b1b1b cdw11:1b1b1b1b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.810 [2024-10-17 13:14:19.788360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.810 [2024-10-17 13:14:19.788415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1b1b1b1b cdw11:1b1b1b1b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.810 [2024-10-17 13:14:19.788429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.810 [2024-10-17 13:14:19.788483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:0b1b1b1b cdw11:1b1b1b1b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.810 [2024-10-17 13:14:19.788496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:11.810 #31 NEW cov: 12405 ft: 14161 corp: 6/158b lim: 40 exec/s: 0 rss: 73Mb L: 38/39 MS: 1 ChangeBit- 00:06:11.810 [2024-10-17 13:14:19.848009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000093 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.810 [2024-10-17 13:14:19.848034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.070 #35 NEW cov: 12405 ft: 14309 corp: 7/166b lim: 40 exec/s: 0 rss: 73Mb L: 8/39 MS: 4 EraseBytes-CopyPart-CopyPart-InsertByte- 00:06:12.070 [2024-10-17 13:14:19.908600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:6e000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.070 [2024-10-17 13:14:19.908626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.070 [2024-10-17 13:14:19.908685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.070 [2024-10-17 13:14:19.908699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.070 [2024-10-17 13:14:19.908754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.070 [2024-10-17 13:14:19.908768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.070 [2024-10-17 13:14:19.908824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.070 
[2024-10-17 13:14:19.908838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.070 #36 NEW cov: 12405 ft: 14397 corp: 8/200b lim: 40 exec/s: 0 rss: 73Mb L: 34/39 MS: 1 ChangeBinInt- 00:06:12.070 [2024-10-17 13:14:19.968801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:1b1b1b26 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.070 [2024-10-17 13:14:19.968827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.070 [2024-10-17 13:14:19.968882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000001b cdw11:1b1b1b1b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.070 [2024-10-17 13:14:19.968895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.070 [2024-10-17 13:14:19.968949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1b1b1b1b cdw11:1b1b1b1b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.070 [2024-10-17 13:14:19.968963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.070 [2024-10-17 13:14:19.969017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:0b1b1b1b cdw11:1b1b1b1b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.070 [2024-10-17 13:14:19.969031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.070 #37 NEW cov: 12405 ft: 14436 corp: 9/238b lim: 40 exec/s: 0 rss: 73Mb L: 38/39 MS: 1 ChangeBinInt- 00:06:12.070 [2024-10-17 13:14:20.028775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.070 [2024-10-17 13:14:20.028802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.070 [2024-10-17 13:14:20.028858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.070 [2024-10-17 13:14:20.028873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.070 [2024-10-17 13:14:20.028929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.070 [2024-10-17 13:14:20.028943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.070 #38 NEW cov: 12405 ft: 14717 corp: 10/269b lim: 40 exec/s: 0 rss: 73Mb L: 31/39 MS: 1 InsertRepeatedBytes- 00:06:12.070 [2024-10-17 13:14:20.069066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.070 [2024-10-17 13:14:20.069093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.070 [2024-10-17 13:14:20.069155] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.070 [2024-10-17 13:14:20.069170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.070 [2024-10-17 13:14:20.069224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000700 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.070 [2024-10-17 13:14:20.069238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.070 [2024-10-17 13:14:20.069291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.070 [2024-10-17 13:14:20.069304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.070 #39 NEW cov: 12405 ft: 14765 corp: 11/308b lim: 40 exec/s: 0 rss: 74Mb L: 39/39 MS: 1 ChangeBinInt- 00:06:12.329 [2024-10-17 13:14:20.129216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.329 [2024-10-17 13:14:20.129243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.329 [2024-10-17 13:14:20.129298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.329 [2024-10-17 13:14:20.129313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.329 [2024-10-17 13:14:20.129366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000700 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.329 [2024-10-17 13:14:20.129380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.329 [2024-10-17 13:14:20.129433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.329 [2024-10-17 13:14:20.129447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.329 #40 NEW cov: 12405 ft: 14796 corp: 12/347b lim: 40 exec/s: 0 rss: 74Mb L: 39/39 MS: 1 ShuffleBytes- 00:06:12.329 [2024-10-17 13:14:20.189344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.329 [2024-10-17 13:14:20.189370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.329 [2024-10-17 13:14:20.189426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.329 [2024-10-17 13:14:20.189441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.329 [2024-10-17 
13:14:20.189495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000700 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.329 [2024-10-17 13:14:20.189510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.329 [2024-10-17 13:14:20.189562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.329 [2024-10-17 13:14:20.189576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.329 NEW_FUNC[1/1]: 0x1bff788 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:12.329 #41 NEW cov: 12428 ft: 14840 corp: 13/384b lim: 40 exec/s: 0 rss: 74Mb L: 37/39 MS: 1 EraseBytes- 00:06:12.329 [2024-10-17 13:14:20.249040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:2d000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.329 [2024-10-17 13:14:20.249065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.329 #42 NEW cov: 12428 ft: 14929 corp: 14/393b lim: 40 exec/s: 0 rss: 74Mb L: 9/39 MS: 1 InsertByte- 00:06:12.329 [2024-10-17 13:14:20.309650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.329 [2024-10-17 13:14:20.309676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.329 [2024-10-17 13:14:20.309731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.329 [2024-10-17 13:14:20.309745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.329 [2024-10-17 13:14:20.309799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.329 [2024-10-17 13:14:20.309813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.329 [2024-10-17 13:14:20.309868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.329 [2024-10-17 13:14:20.309882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.329 #43 NEW cov: 12428 ft: 14941 corp: 15/432b lim: 40 exec/s: 43 rss: 74Mb L: 39/39 MS: 1 ShuffleBytes- 00:06:12.329 [2024-10-17 13:14:20.349957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.329 [2024-10-17 13:14:20.349983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.329 [2024-10-17 13:14:20.350040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.329 [2024-10-17 13:14:20.350054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.329 [2024-10-17 13:14:20.350107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00e70000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.329 [2024-10-17 13:14:20.350122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.329 [2024-10-17 13:14:20.350194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.329 [2024-10-17 13:14:20.350212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.329 [2024-10-17 13:14:20.350276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.329 [2024-10-17 13:14:20.350290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:12.589 #44 NEW cov: 12428 ft: 15062 corp: 16/472b lim: 40 exec/s: 44 rss: 74Mb L: 40/40 MS: 1 InsertByte- 00:06:12.589 [2024-10-17 13:14:20.410120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:1b1b1b26 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.589 [2024-10-17 13:14:20.410146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.589 [2024-10-17 13:14:20.410207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000001b cdw11:1b1b1b1b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.589 [2024-10-17 13:14:20.410221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.589 [2024-10-17 13:14:20.410272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1b1b1b1b cdw11:1b1b1b1b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.589 [2024-10-17 13:14:20.410286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.589 [2024-10-17 13:14:20.410339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:0b1b1b1b cdw11:1b1b7a1b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.589 [2024-10-17 13:14:20.410352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.589 [2024-10-17 13:14:20.410405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:1b1b1b1b cdw11:1b1b1b7a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.589 [2024-10-17 13:14:20.410420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:12.589 #45 NEW cov: 12428 ft: 15074 corp: 17/512b lim: 40 exec/s: 45 rss: 74Mb L: 40/40 MS: 1 CopyPart- 00:06:12.589 [2024-10-17 13:14:20.470305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) 
qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.589 [2024-10-17 13:14:20.470331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.589 [2024-10-17 13:14:20.470385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.589 [2024-10-17 13:14:20.470399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.589 [2024-10-17 13:14:20.470454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00e70000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.590 [2024-10-17 13:14:20.470469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.590 [2024-10-17 13:14:20.470523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:000000f7 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.590 [2024-10-17 13:14:20.470537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.590 [2024-10-17 13:14:20.470588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.590 [2024-10-17 13:14:20.470605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:12.590 #46 NEW cov: 12428 ft: 15087 corp: 18/552b lim: 40 exec/s: 46 rss: 74Mb L: 40/40 MS: 1 ChangeBinInt- 00:06:12.590 [2024-10-17 13:14:20.530311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.590 [2024-10-17 13:14:20.530337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.590 [2024-10-17 13:14:20.530390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.590 [2024-10-17 13:14:20.530404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.590 [2024-10-17 13:14:20.530459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000700 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.590 [2024-10-17 13:14:20.530473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.590 [2024-10-17 13:14:20.530526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.590 [2024-10-17 13:14:20.530540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.590 #47 NEW cov: 12428 ft: 15104 corp: 19/589b lim: 40 exec/s: 47 rss: 74Mb L: 37/40 MS: 1 CrossOver- 00:06:12.590 [2024-10-17 13:14:20.590455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:6e000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.590 [2024-10-17 13:14:20.590481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.590 [2024-10-17 13:14:20.590536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.590 [2024-10-17 13:14:20.590550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.590 [2024-10-17 13:14:20.590602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.590 [2024-10-17 13:14:20.590616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.590 [2024-10-17 13:14:20.590671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00400000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.590 [2024-10-17 13:14:20.590684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.590 #48 NEW cov: 12428 ft: 15126 corp: 20/623b lim: 40 exec/s: 48 rss: 74Mb L: 34/40 MS: 1 ChangeBit- 00:06:12.590 [2024-10-17 13:14:20.630611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.590 [2024-10-17 13:14:20.630637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.590 [2024-10-17 13:14:20.630688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.590 [2024-10-17 13:14:20.630702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.590 [2024-10-17 13:14:20.630755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.590 [2024-10-17 13:14:20.630772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.590 [2024-10-17 13:14:20.630824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.590 [2024-10-17 13:14:20.630838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.849 #49 NEW cov: 12428 ft: 15131 corp: 21/662b lim: 40 exec/s: 49 rss: 74Mb L: 39/40 MS: 1 ChangeBit- 00:06:12.849 [2024-10-17 13:14:20.670514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.849 [2024-10-17 13:14:20.670540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.849 [2024-10-17 13:14:20.670597] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.849 [2024-10-17 13:14:20.670611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.850 [2024-10-17 13:14:20.670665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.850 [2024-10-17 13:14:20.670679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.850 #50 NEW cov: 12428 ft: 15203 corp: 22/686b lim: 40 exec/s: 50 rss: 74Mb L: 24/40 MS: 1 EraseBytes- 00:06:12.850 [2024-10-17 13:14:20.710805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:6e000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.850 [2024-10-17 13:14:20.710831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.850 [2024-10-17 13:14:20.710888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.850 [2024-10-17 13:14:20.710902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.850 [2024-10-17 13:14:20.710954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.850 [2024-10-17 13:14:20.710969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.850 [2024-10-17 13:14:20.711023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:40000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.850 [2024-10-17 13:14:20.711037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.850 #51 NEW cov: 12428 ft: 15231 corp: 23/723b lim: 40 exec/s: 51 rss: 75Mb L: 37/40 MS: 1 InsertRepeatedBytes- 00:06:12.850 [2024-10-17 13:14:20.770956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:6e000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.850 [2024-10-17 13:14:20.770982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.850 [2024-10-17 13:14:20.771036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.850 [2024-10-17 13:14:20.771050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.850 [2024-10-17 13:14:20.771104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:000000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.850 [2024-10-17 13:14:20.771122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.850 [2024-10-17 
13:14:20.771180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.850 [2024-10-17 13:14:20.771194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.850 #52 NEW cov: 12428 ft: 15288 corp: 24/757b lim: 40 exec/s: 52 rss: 75Mb L: 34/40 MS: 1 ChangeBinInt- 00:06:12.850 [2024-10-17 13:14:20.811067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:1b1b1b26 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.850 [2024-10-17 13:14:20.811093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.850 [2024-10-17 13:14:20.811147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000001b cdw11:1b1b1b1b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.850 [2024-10-17 13:14:20.811166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.850 [2024-10-17 13:14:20.811236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1b1b1b1b cdw11:1b1b1b1b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.850 [2024-10-17 13:14:20.811251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.850 [2024-10-17 13:14:20.811304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:0b1b1b1b cdw11:1b1b1b1b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.850 [2024-10-17 13:14:20.811318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.850 #53 NEW cov: 12428 ft: 15299 corp: 25/796b lim: 40 exec/s: 53 rss: 75Mb L: 39/40 MS: 1 CopyPart- 00:06:12.850 [2024-10-17 13:14:20.851182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.850 [2024-10-17 13:14:20.851208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.850 [2024-10-17 13:14:20.851262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.850 [2024-10-17 13:14:20.851275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.850 [2024-10-17 13:14:20.851328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.850 [2024-10-17 13:14:20.851342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.850 [2024-10-17 13:14:20.851396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.850 [2024-10-17 13:14:20.851410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:06:12.850 #54 NEW cov: 12428 ft: 15307 corp: 26/835b lim: 40 exec/s: 54 rss: 75Mb L: 39/40 MS: 1 CrossOver- 00:06:13.109 [2024-10-17 13:14:20.911168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:1b1b1b26 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.109 [2024-10-17 13:14:20.911193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.109 [2024-10-17 13:14:20.911247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000001b cdw11:1b1b1b1b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.109 [2024-10-17 13:14:20.911264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.109 [2024-10-17 13:14:20.911318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1b1b1b1b cdw11:1b1b1b1b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.109 [2024-10-17 13:14:20.911332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.109 #55 NEW cov: 12428 ft: 15321 corp: 27/861b lim: 40 exec/s: 55 rss: 75Mb L: 26/40 MS: 1 EraseBytes- 00:06:13.109 [2024-10-17 13:14:20.971567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.109 [2024-10-17 13:14:20.971593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.110 [2024-10-17 13:14:20.971647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.110 [2024-10-17 13:14:20.971661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.110 [2024-10-17 13:14:20.971714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000700 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.110 [2024-10-17 13:14:20.971729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.110 [2024-10-17 13:14:20.971780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.110 [2024-10-17 13:14:20.971794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.110 #56 NEW cov: 12428 ft: 15324 corp: 28/898b lim: 40 exec/s: 56 rss: 75Mb L: 37/40 MS: 1 ChangeBinInt- 00:06:13.110 [2024-10-17 13:14:21.031698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.110 [2024-10-17 13:14:21.031723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.110 [2024-10-17 13:14:21.031777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.110 [2024-10-17 
13:14:21.031791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.110 [2024-10-17 13:14:21.031841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000700 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.110 [2024-10-17 13:14:21.031854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.110 [2024-10-17 13:14:21.031906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.110 [2024-10-17 13:14:21.031920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.110 #57 NEW cov: 12428 ft: 15359 corp: 29/935b lim: 40 exec/s: 57 rss: 75Mb L: 37/40 MS: 1 ShuffleBytes- 00:06:13.110 [2024-10-17 13:14:21.091894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.110 [2024-10-17 13:14:21.091919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.110 [2024-10-17 13:14:21.091973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.110 [2024-10-17 13:14:21.091990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.110 [2024-10-17 13:14:21.092044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.110 [2024-10-17 13:14:21.092058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.110 [2024-10-17 13:14:21.092111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:0253bcb5 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.110 [2024-10-17 13:14:21.092124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.110 #58 NEW cov: 12428 ft: 15433 corp: 30/972b lim: 40 exec/s: 58 rss: 75Mb L: 37/40 MS: 1 CMP- DE: "\001\000\000\000\002S\274\265"- 00:06:13.110 [2024-10-17 13:14:21.152008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.110 [2024-10-17 13:14:21.152034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.110 [2024-10-17 13:14:21.152089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.110 [2024-10-17 13:14:21.152103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.110 [2024-10-17 13:14:21.152161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:06:13.110 [2024-10-17 13:14:21.152175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.110 [2024-10-17 13:14:21.152228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:e7000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.110 [2024-10-17 13:14:21.152242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.370 #59 NEW cov: 12428 ft: 15444 corp: 31/1004b lim: 40 exec/s: 59 rss: 75Mb L: 32/40 MS: 1 CrossOver- 00:06:13.370 [2024-10-17 13:14:21.212208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:6e010000 cdw11:000253bc SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.370 [2024-10-17 13:14:21.212233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.370 [2024-10-17 13:14:21.212289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b5000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.370 [2024-10-17 13:14:21.212304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.370 [2024-10-17 13:14:21.212356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.370 [2024-10-17 13:14:21.212370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.370 [2024-10-17 13:14:21.212425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00400000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.370 [2024-10-17 13:14:21.212439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.370 #60 NEW cov: 12428 ft: 15469 corp: 32/1038b lim: 40 exec/s: 60 rss: 75Mb L: 34/40 MS: 1 PersAutoDict- DE: "\001\000\000\000\002S\274\265"- 00:06:13.370 [2024-10-17 13:14:21.251862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:000a0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.370 [2024-10-17 13:14:21.251891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.370 #61 NEW cov: 12428 ft: 15471 corp: 33/1051b lim: 40 exec/s: 61 rss: 75Mb L: 13/40 MS: 1 CrossOver- 00:06:13.370 [2024-10-17 13:14:21.292253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.370 [2024-10-17 13:14:21.292279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.370 [2024-10-17 13:14:21.292334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.370 [2024-10-17 13:14:21.292349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:06:13.370 [2024-10-17 13:14:21.292402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.370 [2024-10-17 13:14:21.292416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.370 #62 NEW cov: 12428 ft: 15524 corp: 34/1082b lim: 40 exec/s: 62 rss: 75Mb L: 31/40 MS: 1 CopyPart- 00:06:13.370 [2024-10-17 13:14:21.332682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:1b1b1b26 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.370 [2024-10-17 13:14:21.332706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.370 [2024-10-17 13:14:21.332761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000001b cdw11:1b1b1b1b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.370 [2024-10-17 13:14:21.332775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.370 [2024-10-17 13:14:21.332831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1b1b1b1b cdw11:1b1b1b1b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.370 [2024-10-17 13:14:21.332845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.370 [2024-10-17 13:14:21.332897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:0b1b1b11 cdw11:1b1b7a1b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.370 [2024-10-17 13:14:21.332911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.370 [2024-10-17 13:14:21.332965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:1b1b1b1b cdw11:1b1b1b7a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.370 [2024-10-17 13:14:21.332978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:13.370 #63 NEW cov: 12428 ft: 15557 corp: 35/1122b lim: 40 exec/s: 31 rss: 75Mb L: 40/40 MS: 1 ChangeBinInt- 00:06:13.370 #63 DONE cov: 12428 ft: 15557 corp: 35/1122b lim: 40 exec/s: 31 rss: 75Mb 00:06:13.370 ###### Recommended dictionary. ###### 00:06:13.370 "\001\000\000\000\002S\274\265" # Uses: 1 00:06:13.370 ###### End of recommended dictionary. 
###### 00:06:13.370 Done 63 runs in 2 second(s) 00:06:13.630 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:06:13.630 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:13.630 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:13.630 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:06:13.630 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:06:13.630 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:13.630 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:13.630 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:13.630 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:06:13.630 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:13.630 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:13.630 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:06:13.630 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:06:13.630 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:13.630 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:06:13.630 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:13.630 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:13.630 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:13.630 13:14:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:06:13.630 [2024-10-17 13:14:21.522608] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
00:06:13.630 [2024-10-17 13:14:21.522677] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3845508 ] 00:06:13.889 [2024-10-17 13:14:21.701397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.889 [2024-10-17 13:14:21.735210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.889 [2024-10-17 13:14:21.794353] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:13.889 [2024-10-17 13:14:21.810736] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:06:13.889 INFO: Running with entropic power schedule (0xFF, 100). 00:06:13.889 INFO: Seed: 1575452895 00:06:13.889 INFO: Loaded 1 modules (384632 inline 8-bit counters): 384632 [0x2bf274c, 0x2c505c4), 00:06:13.889 INFO: Loaded 1 PC tables (384632 PCs): 384632 [0x2c505c8,0x322ed48), 00:06:13.889 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:13.889 INFO: A corpus is not provided, starting from an empty corpus 00:06:13.889 #2 INITED exec/s: 0 rss: 65Mb 00:06:13.889 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:13.889 This may also happen if the target rejected all inputs we tried so far 00:06:13.889 [2024-10-17 13:14:21.887408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.889 [2024-10-17 13:14:21.887446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.889 [2024-10-17 13:14:21.887571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.889 [2024-10-17 13:14:21.887588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.889 [2024-10-17 13:14:21.887716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.889 [2024-10-17 13:14:21.887737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.148 NEW_FUNC[1/714]: 0x44e138 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:06:14.148 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:14.148 #3 NEW cov: 12172 ft: 12189 corp: 2/29b lim: 40 exec/s: 0 rss: 73Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:06:14.407 [2024-10-17 13:14:22.228384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.407 [2024-10-17 13:14:22.228433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.407 [2024-10-17 13:14:22.228575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) 
qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.407 [2024-10-17 13:14:22.228596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.407 [2024-10-17 13:14:22.228731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.407 [2024-10-17 13:14:22.228751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.407 #4 NEW cov: 12302 ft: 12862 corp: 3/58b lim: 40 exec/s: 0 rss: 73Mb L: 29/29 MS: 1 InsertByte- 00:06:14.407 [2024-10-17 13:14:22.298460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.407 [2024-10-17 13:14:22.298492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.407 [2024-10-17 13:14:22.298623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.407 [2024-10-17 13:14:22.298644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.407 [2024-10-17 13:14:22.298782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00f90000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.407 [2024-10-17 13:14:22.298802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.407 #5 NEW cov: 12308 ft: 13111 corp: 4/87b lim: 40 exec/s: 0 rss: 73Mb L: 29/29 MS: 1 ChangeByte- 00:06:14.407 [2024-10-17 13:14:22.368336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.407 [2024-10-17 13:14:22.368366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.407 [2024-10-17 13:14:22.368492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.407 [2024-10-17 13:14:22.368511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.407 #6 NEW cov: 12393 ft: 13644 corp: 5/104b lim: 40 exec/s: 0 rss: 73Mb L: 17/29 MS: 1 CrossOver- 00:06:14.408 [2024-10-17 13:14:22.438808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.408 [2024-10-17 13:14:22.438836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.408 [2024-10-17 13:14:22.438964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:02000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.408 [2024-10-17 13:14:22.438984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.408 [2024-10-17 13:14:22.439117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00f90000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.408 [2024-10-17 13:14:22.439138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.667 #7 NEW cov: 12393 ft: 13756 corp: 6/133b lim: 40 exec/s: 0 rss: 73Mb L: 29/29 MS: 1 ChangeBit- 00:06:14.667 [2024-10-17 13:14:22.489045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.667 [2024-10-17 13:14:22.489075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.667 [2024-10-17 13:14:22.489206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.667 [2024-10-17 13:14:22.489224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.667 [2024-10-17 13:14:22.489373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.667 [2024-10-17 13:14:22.489391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.667 #8 NEW cov: 12393 ft: 13804 corp: 7/161b lim: 40 exec/s: 0 rss: 73Mb L: 28/29 MS: 1 ShuffleBytes- 00:06:14.667 [2024-10-17 13:14:22.538621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:07000000 cdw11:de000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.667 [2024-10-17 13:14:22.538649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.667 #11 NEW cov: 12393 ft: 14202 corp: 8/170b lim: 40 exec/s: 0 rss: 73Mb L: 9/29 MS: 3 InsertByte-InsertByte-CrossOver- 00:06:14.667 [2024-10-17 13:14:22.589266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.667 [2024-10-17 13:14:22.589293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.667 [2024-10-17 13:14:22.589430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.667 [2024-10-17 13:14:22.589450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.667 [2024-10-17 13:14:22.589582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.667 [2024-10-17 13:14:22.589599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.667 #12 NEW cov: 12393 ft: 14233 corp: 9/200b lim: 40 exec/s: 0 rss: 73Mb L: 30/30 MS: 1 CopyPart- 00:06:14.667 [2024-10-17 13:14:22.639203] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a8000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.667 [2024-10-17 13:14:22.639230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.667 [2024-10-17 13:14:22.639364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.667 [2024-10-17 13:14:22.639385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.667 #15 NEW cov: 12393 ft: 14270 corp: 10/217b lim: 40 exec/s: 0 rss: 73Mb L: 17/30 MS: 3 InsertByte-CopyPart-CrossOver- 00:06:14.667 [2024-10-17 13:14:22.689628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.668 [2024-10-17 13:14:22.689655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.668 [2024-10-17 13:14:22.689790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:80000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.668 [2024-10-17 13:14:22.689808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.668 [2024-10-17 13:14:22.689935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00f90000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.668 [2024-10-17 13:14:22.689952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.668 #21 NEW cov: 12393 ft: 14318 corp: 11/246b lim: 40 exec/s: 0 rss: 73Mb L: 29/30 MS: 1 ChangeBit- 00:06:14.927 [2024-10-17 13:14:22.740046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.927 [2024-10-17 13:14:22.740073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.928 [2024-10-17 13:14:22.740207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0c0c0c00 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.928 [2024-10-17 13:14:22.740225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.928 [2024-10-17 13:14:22.740365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.928 [2024-10-17 13:14:22.740385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.928 [2024-10-17 13:14:22.740522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:f900008b cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.928 [2024-10-17 13:14:22.740540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:14.928 NEW_FUNC[1/1]: 0x1bff788 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:14.928 #22 NEW cov: 12416 ft: 14841 corp: 12/278b lim: 40 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:06:14.928 [2024-10-17 13:14:22.809755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.928 [2024-10-17 13:14:22.809782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.928 [2024-10-17 13:14:22.809916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.928 [2024-10-17 13:14:22.809934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.928 #24 NEW cov: 12416 ft: 14873 corp: 13/296b lim: 40 exec/s: 0 rss: 74Mb L: 18/32 MS: 2 ChangeBit-CrossOver- 00:06:14.928 [2024-10-17 13:14:22.859688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:23000000 cdw11:de000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.928 [2024-10-17 13:14:22.859715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.928 #25 NEW cov: 12416 ft: 14939 corp: 14/305b lim: 40 exec/s: 25 rss: 74Mb L: 9/32 MS: 1 ChangeByte- 00:06:14.928 [2024-10-17 13:14:22.930527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:000000ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.928 [2024-10-17 13:14:22.930554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.928 [2024-10-17 13:14:22.930686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.928 [2024-10-17 13:14:22.930703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.928 [2024-10-17 13:14:22.930828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.928 [2024-10-17 13:14:22.930847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.928 [2024-10-17 13:14:22.930976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.928 [2024-10-17 13:14:22.930993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:14.928 #26 NEW cov: 12416 ft: 14972 corp: 15/341b lim: 40 exec/s: 26 rss: 74Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:06:15.187 [2024-10-17 13:14:22.980767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.187 [2024-10-17 13:14:22.980797] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.187 [2024-10-17 13:14:22.980935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000c0c cdw11:0c000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.187 [2024-10-17 13:14:22.980954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.187 [2024-10-17 13:14:22.981083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00020000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.187 [2024-10-17 13:14:22.981101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.187 [2024-10-17 13:14:22.981238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.187 [2024-10-17 13:14:22.981255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.187 #27 NEW cov: 12416 ft: 14997 corp: 16/376b lim: 40 exec/s: 27 rss: 74Mb L: 35/36 MS: 1 CrossOver- 00:06:15.187 [2024-10-17 13:14:23.051048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:000000ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.187 [2024-10-17 13:14:23.051077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.187 [2024-10-17 13:14:23.051206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.187 [2024-10-17 13:14:23.051224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.188 [2024-10-17 13:14:23.051357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:08000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.188 [2024-10-17 13:14:23.051379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.188 [2024-10-17 13:14:23.051513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.188 [2024-10-17 13:14:23.051531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.188 #28 NEW cov: 12416 ft: 15009 corp: 17/412b lim: 40 exec/s: 28 rss: 74Mb L: 36/36 MS: 1 ChangeBit- 00:06:15.188 [2024-10-17 13:14:23.120467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:07000007 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.188 [2024-10-17 13:14:23.120495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.188 #29 NEW cov: 12416 ft: 15014 corp: 18/421b lim: 40 exec/s: 29 rss: 74Mb L: 9/36 MS: 1 CopyPart- 00:06:15.188 [2024-10-17 13:14:23.170599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:07000007 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.188 [2024-10-17 13:14:23.170628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.188 #30 NEW cov: 12416 ft: 15020 corp: 19/436b lim: 40 exec/s: 30 rss: 74Mb L: 15/36 MS: 1 CopyPart- 00:06:15.447 [2024-10-17 13:14:23.241376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00d90000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.447 [2024-10-17 13:14:23.241406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.447 [2024-10-17 13:14:23.241544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.447 [2024-10-17 13:14:23.241561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.447 [2024-10-17 13:14:23.241687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00f90000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.447 [2024-10-17 13:14:23.241707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.447 #31 NEW cov: 12416 ft: 15029 corp: 20/465b lim: 40 exec/s: 31 rss: 74Mb L: 29/36 MS: 1 ChangeByte- 00:06:15.447 [2024-10-17 13:14:23.291770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.447 [2024-10-17 13:14:23.291799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.447 [2024-10-17 13:14:23.291924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.447 [2024-10-17 13:14:23.291941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.447 [2024-10-17 13:14:23.292074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.447 [2024-10-17 13:14:23.292092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.447 [2024-10-17 13:14:23.292228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:0d0d0d0d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.447 [2024-10-17 13:14:23.292248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.447 #37 NEW cov: 12416 ft: 15036 corp: 21/499b lim: 40 exec/s: 37 rss: 74Mb L: 34/36 MS: 1 InsertRepeatedBytes- 00:06:15.447 [2024-10-17 13:14:23.341213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:23000000 cdw11:de00000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.447 [2024-10-17 13:14:23.341244] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.447 #38 NEW cov: 12416 ft: 15111 corp: 22/508b lim: 40 exec/s: 38 rss: 74Mb L: 9/36 MS: 1 CopyPart- 00:06:15.447 [2024-10-17 13:14:23.412115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.447 [2024-10-17 13:14:23.412143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.447 [2024-10-17 13:14:23.412274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.447 [2024-10-17 13:14:23.412293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.447 [2024-10-17 13:14:23.412426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.447 [2024-10-17 13:14:23.412443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.447 [2024-10-17 13:14:23.412566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:8b000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.447 [2024-10-17 13:14:23.412584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.447 #39 NEW cov: 12416 ft: 15180 corp: 23/541b lim: 40 exec/s: 39 rss: 74Mb L: 33/36 MS: 1 InsertRepeatedBytes- 00:06:15.447 [2024-10-17 13:14:23.462027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.447 [2024-10-17 13:14:23.462054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.447 [2024-10-17 13:14:23.462187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.447 [2024-10-17 13:14:23.462205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.447 [2024-10-17 13:14:23.462334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0000dd00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.447 [2024-10-17 13:14:23.462351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.707 #40 NEW cov: 12416 ft: 15219 corp: 24/571b lim: 40 exec/s: 40 rss: 74Mb L: 30/36 MS: 1 ChangeByte- 00:06:15.707 [2024-10-17 13:14:23.531694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:23de0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.708 [2024-10-17 13:14:23.531723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.708 #41 NEW cov: 12416 ft: 15228 corp: 25/580b lim: 40 exec/s: 41 rss: 74Mb L: 
9/36 MS: 1 ShuffleBytes- 00:06:15.708 [2024-10-17 13:14:23.582563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.708 [2024-10-17 13:14:23.582593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.708 [2024-10-17 13:14:23.582733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.708 [2024-10-17 13:14:23.582754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.708 [2024-10-17 13:14:23.582888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0000fdfd cdw11:fdfdfdfd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.708 [2024-10-17 13:14:23.582905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.708 [2024-10-17 13:14:23.583027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:fdfd0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.708 [2024-10-17 13:14:23.583047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.708 #42 NEW cov: 12416 ft: 15250 corp: 26/616b lim: 40 exec/s: 42 rss: 74Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:06:15.708 [2024-10-17 13:14:23.652276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.708 [2024-10-17 13:14:23.652304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.708 [2024-10-17 13:14:23.652444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.708 [2024-10-17 13:14:23.652464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.708 #43 NEW cov: 12416 ft: 15258 corp: 27/634b lim: 40 exec/s: 43 rss: 74Mb L: 18/36 MS: 1 EraseBytes- 00:06:15.708 [2024-10-17 13:14:23.722962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.708 [2024-10-17 13:14:23.722991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.708 [2024-10-17 13:14:23.723124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.708 [2024-10-17 13:14:23.723142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.708 [2024-10-17 13:14:23.723273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.708 [2024-10-17 13:14:23.723292] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.708 [2024-10-17 13:14:23.723427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:8b000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.708 [2024-10-17 13:14:23.723445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.968 #44 NEW cov: 12416 ft: 15279 corp: 28/667b lim: 40 exec/s: 44 rss: 75Mb L: 33/36 MS: 1 ShuffleBytes- 00:06:15.968 [2024-10-17 13:14:23.792683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0700003a cdw11:07000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.968 [2024-10-17 13:14:23.792711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.968 [2024-10-17 13:14:23.792841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:000a0000 cdw11:07000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.968 [2024-10-17 13:14:23.792859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.968 #45 NEW cov: 12416 ft: 15283 corp: 29/683b lim: 40 exec/s: 45 rss: 75Mb L: 16/36 MS: 1 InsertByte- 00:06:15.968 [2024-10-17 13:14:23.863330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.968 [2024-10-17 13:14:23.863357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.968 [2024-10-17 13:14:23.863487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000000fd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.968 [2024-10-17 13:14:23.863506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.968 [2024-10-17 13:14:23.863634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:fd000000 cdw11:fdfdfdfd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.968 [2024-10-17 13:14:23.863653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.968 [2024-10-17 13:14:23.863782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:fdfd0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.968 [2024-10-17 13:14:23.863798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.968 #46 NEW cov: 12416 ft: 15377 corp: 30/719b lim: 40 exec/s: 23 rss: 75Mb L: 36/36 MS: 1 ShuffleBytes- 00:06:15.968 #46 DONE cov: 12416 ft: 15377 corp: 30/719b lim: 40 exec/s: 23 rss: 75Mb 00:06:15.968 Done 46 runs in 2 second(s) 00:06:15.968 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:06:15.968 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:15.968 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:15.968 13:14:24 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:06:15.968 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:06:15.968 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:15.968 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:15.968 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:15.968 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:06:15.968 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:15.968 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:15.968 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:06:15.968 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:06:15.968 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:16.228 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:06:16.228 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:16.228 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:16.228 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:16.228 13:14:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:06:16.228 [2024-10-17 13:14:24.053633] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:06:16.228 [2024-10-17 13:14:24.053704] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3846037 ] 00:06:16.228 [2024-10-17 13:14:24.231288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.228 [2024-10-17 13:14:24.264767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.488 [2024-10-17 13:14:24.323889] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:16.488 [2024-10-17 13:14:24.340226] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:06:16.488 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:16.488 INFO: Seed: 4104463029 00:06:16.488 INFO: Loaded 1 modules (384632 inline 8-bit counters): 384632 [0x2bf274c, 0x2c505c4), 00:06:16.488 INFO: Loaded 1 PC tables (384632 PCs): 384632 [0x2c505c8,0x322ed48), 00:06:16.488 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:16.488 INFO: A corpus is not provided, starting from an empty corpus 00:06:16.488 #2 INITED exec/s: 0 rss: 66Mb 00:06:16.488 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:16.488 This may also happen if the target rejected all inputs we tried so far 00:06:16.488 [2024-10-17 13:14:24.395809] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.488 [2024-10-17 13:14:24.395839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.748 NEW_FUNC[1/717]: 0x44fd08 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:06:16.748 NEW_FUNC[2/717]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:06:16.748 #7 NEW cov: 12215 ft: 12209 corp: 2/18b lim: 35 exec/s: 0 rss: 74Mb L: 17/17 MS: 5 ShuffleBytes-CrossOver-CrossOver-CopyPart-InsertRepeatedBytes- 00:06:16.748 #8 NEW cov: 12329 ft: 13366 corp: 3/31b lim: 35 exec/s: 0 rss: 74Mb L: 13/17 MS: 1 EraseBytes- 00:06:17.007 #19 NEW cov: 12335 ft: 13571 corp: 4/44b lim: 35 exec/s: 0 rss: 74Mb L: 13/17 MS: 1 ShuffleBytes- 00:06:17.008 [2024-10-17 13:14:24.836981] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.008 [2024-10-17 13:14:24.837011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.008 [2024-10-17 13:14:24.837132] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.008 [2024-10-17 13:14:24.837145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.008 #20 NEW cov: 12420 ft: 14098 corp: 5/66b lim: 35 exec/s: 0 rss: 74Mb L: 22/22 MS: 1 CrossOver- 00:06:17.008 #21 NEW cov: 12420 ft: 14229 corp: 6/79b lim: 35 exec/s: 0 rss: 74Mb L: 13/22 MS: 1 CrossOver- 00:06:17.008 #22 NEW cov: 12420 ft: 14293 corp: 7/92b lim: 35 exec/s: 0 rss: 74Mb L: 13/22 MS: 1 ShuffleBytes- 00:06:17.008 [2024-10-17 13:14:24.977396] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.008 [2024-10-17 13:14:24.977422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.008 [2024-10-17 13:14:24.977543] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.008 [2024-10-17 13:14:24.977557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.008 #23 NEW cov: 12420 ft: 14360 corp: 8/115b lim: 35 exec/s: 0 rss: 74Mb L: 23/23 MS: 1 InsertByte- 00:06:17.008 [2024-10-17 13:14:25.037866] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.008 [2024-10-17 13:14:25.037894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.008 [2024-10-17 13:14:25.037975] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000d7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.008 [2024-10-17 13:14:25.037991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.008 [2024-10-17 13:14:25.038050] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000d7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.008 [2024-10-17 13:14:25.038066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.008 [2024-10-17 13:14:25.038168] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.008 [2024-10-17 13:14:25.038183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:17.267 #24 NEW cov: 12427 ft: 14720 corp: 9/150b lim: 35 exec/s: 0 rss: 75Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:06:17.267 #25 NEW cov: 12427 ft: 14780 corp: 10/163b lim: 35 exec/s: 0 rss: 75Mb L: 13/35 MS: 1 ChangeBinInt- 00:06:17.267 #26 NEW cov: 12427 ft: 14862 corp: 11/171b lim: 35 exec/s: 0 rss: 75Mb L: 8/35 MS: 1 EraseBytes- 00:06:17.267 #30 NEW cov: 12427 ft: 14879 corp: 12/181b lim: 35 exec/s: 0 rss: 75Mb L: 10/35 MS: 4 CrossOver-CMP-ChangeByte-CMP- DE: "\001\013"-"\377\377\377\036"- 00:06:17.267 NEW_FUNC[1/1]: 0x1bff788 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:17.267 #31 NEW cov: 12450 ft: 14965 corp: 13/194b lim: 35 exec/s: 0 rss: 75Mb L: 13/35 MS: 1 ChangeBinInt- 00:06:17.267 [2024-10-17 13:14:25.298636] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.267 [2024-10-17 13:14:25.298664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.267 [2024-10-17 13:14:25.298725] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000d7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.267 [2024-10-17 13:14:25.298741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.267 [2024-10-17 13:14:25.298799] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000d7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.267 [2024-10-17 13:14:25.298814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.267 [2024-10-17 13:14:25.298908] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.267 [2024-10-17 13:14:25.298921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:17.527 #32 NEW cov: 12450 ft: 15031 corp: 
14/229b lim: 35 exec/s: 0 rss: 75Mb L: 35/35 MS: 1 CrossOver- 00:06:17.527 [2024-10-17 13:14:25.358554] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.527 [2024-10-17 13:14:25.358580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.527 [2024-10-17 13:14:25.358679] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.527 [2024-10-17 13:14:25.358695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.527 #33 NEW cov: 12450 ft: 15042 corp: 15/252b lim: 35 exec/s: 33 rss: 75Mb L: 23/35 MS: 1 CrossOver- 00:06:17.527 [2024-10-17 13:14:25.398876] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.527 [2024-10-17 13:14:25.398906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.527 [2024-10-17 13:14:25.398967] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000d7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.527 [2024-10-17 13:14:25.398986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.527 [2024-10-17 13:14:25.399043] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000d7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.527 [2024-10-17 13:14:25.399061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.527 [2024-10-17 13:14:25.399159] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:000000a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.527 [2024-10-17 13:14:25.399174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:17.527 #34 NEW cov: 12450 ft: 15073 corp: 16/287b lim: 35 exec/s: 34 rss: 75Mb L: 35/35 MS: 1 ChangeBit- 00:06:17.527 [2024-10-17 13:14:25.459141] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.527 [2024-10-17 13:14:25.459172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.527 [2024-10-17 13:14:25.459231] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000d7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.527 [2024-10-17 13:14:25.459248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.527 [2024-10-17 13:14:25.459344] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:000000d7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.527 [2024-10-17 13:14:25.459359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:17.527 [2024-10-17 13:14:25.459420] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 
cdw10:000000a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.527 [2024-10-17 13:14:25.459434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:17.527 #35 NEW cov: 12450 ft: 15165 corp: 17/322b lim: 35 exec/s: 35 rss: 75Mb L: 35/35 MS: 1 ShuffleBytes- 00:06:17.527 [2024-10-17 13:14:25.518586] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000027 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.527 [2024-10-17 13:14:25.518612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.527 #36 NEW cov: 12450 ft: 15176 corp: 18/335b lim: 35 exec/s: 36 rss: 75Mb L: 13/35 MS: 1 ChangeByte- 00:06:17.527 [2024-10-17 13:14:25.558910] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000027 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.527 [2024-10-17 13:14:25.558936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.527 [2024-10-17 13:14:25.558997] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.527 [2024-10-17 13:14:25.559012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.787 #37 NEW cov: 12450 ft: 15187 corp: 19/354b lim: 35 exec/s: 37 rss: 75Mb L: 19/35 MS: 1 CopyPart- 00:06:17.787 [2024-10-17 13:14:25.619095] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.787 [2024-10-17 13:14:25.619124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.787 #38 NEW cov: 12450 ft: 15202 corp: 20/371b lim: 35 exec/s: 38 rss: 75Mb L: 17/35 MS: 1 ChangeBit- 00:06:17.787 [2024-10-17 13:14:25.659683] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.787 [2024-10-17 13:14:25.659710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.787 [2024-10-17 13:14:25.659851] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:000000d7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.787 [2024-10-17 13:14:25.659868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:17.787 [2024-10-17 13:14:25.659929] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:000000a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.787 [2024-10-17 13:14:25.659943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:17.787 NEW_FUNC[1/2]: 0x46a6e8 in feat_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:273 00:06:17.787 NEW_FUNC[2/2]: 0x13431b8 in nvmf_ctrlr_set_features_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1604 00:06:17.787 #39 NEW cov: 12507 ft: 15380 corp: 21/406b lim: 35 exec/s: 39 rss: 75Mb L: 35/35 MS: 1 PersAutoDict- DE: "\001\013"- 00:06:17.787 [2024-10-17 
13:14:25.719370] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.787 [2024-10-17 13:14:25.719397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.787 #40 NEW cov: 12507 ft: 15395 corp: 22/423b lim: 35 exec/s: 40 rss: 75Mb L: 17/35 MS: 1 ChangeByte- 00:06:17.787 [2024-10-17 13:14:25.779826] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.787 [2024-10-17 13:14:25.779851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.787 [2024-10-17 13:14:25.779952] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.787 [2024-10-17 13:14:25.779967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.787 [2024-10-17 13:14:25.780027] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.787 [2024-10-17 13:14:25.780042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:17.787 #46 NEW cov: 12507 ft: 15426 corp: 23/454b lim: 35 exec/s: 46 rss: 75Mb L: 31/35 MS: 1 CopyPart- 00:06:17.787 [2024-10-17 13:14:25.819743] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.787 [2024-10-17 13:14:25.819769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.787 [2024-10-17 13:14:25.819866] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.787 [2024-10-17 13:14:25.819880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.047 #47 NEW cov: 12507 ft: 15436 corp: 24/477b lim: 35 exec/s: 47 rss: 75Mb L: 23/35 MS: 1 ShuffleBytes- 00:06:18.047 [2024-10-17 13:14:25.859881] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.047 [2024-10-17 13:14:25.859907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.047 [2024-10-17 13:14:25.860008] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.047 [2024-10-17 13:14:25.860023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.047 #48 NEW cov: 12507 ft: 15450 corp: 25/500b lim: 35 exec/s: 48 rss: 76Mb L: 23/35 MS: 1 ShuffleBytes- 00:06:18.047 [2024-10-17 13:14:25.920367] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.047 [2024-10-17 13:14:25.920393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.047 [2024-10-17 
13:14:25.920458] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000d7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.047 [2024-10-17 13:14:25.920475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.047 [2024-10-17 13:14:25.920532] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:000000d7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.047 [2024-10-17 13:14:25.920545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.047 [2024-10-17 13:14:25.920601] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.047 [2024-10-17 13:14:25.920616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:18.047 [2024-10-17 13:14:25.920670] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:000000a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.047 [2024-10-17 13:14:25.920684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:18.047 #49 NEW cov: 12507 ft: 15466 corp: 26/535b lim: 35 exec/s: 49 rss: 76Mb L: 35/35 MS: 1 CopyPart- 00:06:18.047 [2024-10-17 13:14:25.960517] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.047 [2024-10-17 13:14:25.960542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.047 [2024-10-17 13:14:25.960600] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000d7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.047 [2024-10-17 13:14:25.960617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.047 [2024-10-17 13:14:25.960716] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:000000d7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.047 [2024-10-17 13:14:25.960731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:18.047 [2024-10-17 13:14:25.960789] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:000000a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.047 [2024-10-17 13:14:25.960803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:18.047 #50 NEW cov: 12507 ft: 15485 corp: 27/570b lim: 35 exec/s: 50 rss: 76Mb L: 35/35 MS: 1 PersAutoDict- DE: "\001\013"- 00:06:18.047 [2024-10-17 13:14:26.000565] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.047 [2024-10-17 13:14:26.000590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.047 [2024-10-17 13:14:26.000649] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:5 cdw10:8000000b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.047 
[2024-10-17 13:14:26.000670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.047 [2024-10-17 13:14:26.000729] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000d7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.047 [2024-10-17 13:14:26.000745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.047 [2024-10-17 13:14:26.000844] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.047 [2024-10-17 13:14:26.000858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:18.047 NEW_FUNC[1/1]: 0x471728 in feat_async_event_cfg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:346 00:06:18.047 #51 NEW cov: 12607 ft: 15624 corp: 28/605b lim: 35 exec/s: 51 rss: 76Mb L: 35/35 MS: 1 PersAutoDict- DE: "\001\013"- 00:06:18.047 [2024-10-17 13:14:26.040691] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.047 [2024-10-17 13:14:26.040716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.047 [2024-10-17 13:14:26.040776] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000d7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.047 [2024-10-17 13:14:26.040793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.047 [2024-10-17 13:14:26.040890] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.047 [2024-10-17 13:14:26.040904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:18.048 [2024-10-17 13:14:26.040963] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.048 [2024-10-17 13:14:26.040977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:18.048 #52 NEW cov: 12607 ft: 15635 corp: 29/640b lim: 35 exec/s: 52 rss: 76Mb L: 35/35 MS: 1 CrossOver- 00:06:18.048 [2024-10-17 13:14:26.080298] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.048 [2024-10-17 13:14:26.080323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.048 [2024-10-17 13:14:26.080398] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.048 [2024-10-17 13:14:26.080413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.307 #53 NEW cov: 12607 ft: 15644 corp: 30/658b lim: 35 exec/s: 53 rss: 76Mb L: 18/35 MS: 1 EraseBytes- 00:06:18.307 #54 NEW cov: 12607 ft: 15655 corp: 31/671b lim: 35 exec/s: 54 rss: 76Mb L: 13/35 MS: 
1 CrossOver- 00:06:18.307 #55 NEW cov: 12607 ft: 15673 corp: 32/683b lim: 35 exec/s: 55 rss: 76Mb L: 12/35 MS: 1 EraseBytes- 00:06:18.307 [2024-10-17 13:14:26.241324] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.307 [2024-10-17 13:14:26.241350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.307 [2024-10-17 13:14:26.241408] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000d7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.307 [2024-10-17 13:14:26.241423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.307 [2024-10-17 13:14:26.241516] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000001e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.307 [2024-10-17 13:14:26.241533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:18.307 [2024-10-17 13:14:26.241593] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:000000a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.307 [2024-10-17 13:14:26.241606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:18.307 #56 NEW cov: 12607 ft: 15688 corp: 33/718b lim: 35 exec/s: 56 rss: 76Mb L: 35/35 MS: 1 PersAutoDict- DE: "\377\377\377\036"- 00:06:18.307 [2024-10-17 13:14:26.280735] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000027 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.307 [2024-10-17 13:14:26.280760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.307 #57 NEW cov: 12607 ft: 15723 corp: 34/728b lim: 35 exec/s: 57 rss: 76Mb L: 10/35 MS: 1 EraseBytes- 00:06:18.307 [2024-10-17 13:14:26.321021] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.307 [2024-10-17 13:14:26.321047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.307 #58 NEW cov: 12607 ft: 15731 corp: 35/742b lim: 35 exec/s: 58 rss: 76Mb L: 14/35 MS: 1 InsertByte- 00:06:18.567 [2024-10-17 13:14:26.361407] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:6 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.567 [2024-10-17 13:14:26.361436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.567 #59 NEW cov: 12607 ft: 15828 corp: 36/768b lim: 35 exec/s: 29 rss: 76Mb L: 26/35 MS: 1 CrossOver- 00:06:18.567 #59 DONE cov: 12607 ft: 15828 corp: 36/768b lim: 35 exec/s: 29 rss: 76Mb 00:06:18.567 ###### Recommended dictionary. ###### 00:06:18.567 "\001\013" # Uses: 4 00:06:18.567 "\377\377\377\036" # Uses: 1 00:06:18.567 ###### End of recommended dictionary. 
###### 00:06:18.567 Done 59 runs in 2 second(s) 00:06:18.567 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:06:18.567 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:18.567 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:18.567 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:06:18.567 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:06:18.567 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:18.567 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:18.567 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:18.567 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:06:18.567 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:18.567 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:18.567 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:06:18.567 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:06:18.567 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:18.567 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:06:18.567 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:18.567 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:18.567 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:18.567 13:14:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:06:18.567 [2024-10-17 13:14:26.534476] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
00:06:18.567 [2024-10-17 13:14:26.534567] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3846412 ] 00:06:18.827 [2024-10-17 13:14:26.718282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.827 [2024-10-17 13:14:26.752226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.827 [2024-10-17 13:14:26.811231] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:18.827 [2024-10-17 13:14:26.827624] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:06:18.827 INFO: Running with entropic power schedule (0xFF, 100). 00:06:18.827 INFO: Seed: 2297511844 00:06:18.827 INFO: Loaded 1 modules (384632 inline 8-bit counters): 384632 [0x2bf274c, 0x2c505c4), 00:06:18.827 INFO: Loaded 1 PC tables (384632 PCs): 384632 [0x2c505c8,0x322ed48), 00:06:18.827 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:18.827 INFO: A corpus is not provided, starting from an empty corpus 00:06:18.827 #2 INITED exec/s: 0 rss: 65Mb 00:06:18.827 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:18.827 This may also happen if the target rejected all inputs we tried so far 00:06:18.827 [2024-10-17 13:14:26.872534] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.827 [2024-10-17 13:14:26.872570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.827 [2024-10-17 13:14:26.872605] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.827 [2024-10-17 13:14:26.872622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.344 NEW_FUNC[1/715]: 0x451248 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:06:19.344 NEW_FUNC[2/715]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:06:19.344 #9 NEW cov: 12185 ft: 12173 corp: 2/24b lim: 35 exec/s: 0 rss: 73Mb L: 23/23 MS: 2 InsertByte-InsertRepeatedBytes- 00:06:19.344 [2024-10-17 13:14:27.223438] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.344 [2024-10-17 13:14:27.223476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.344 [2024-10-17 13:14:27.223512] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.344 [2024-10-17 13:14:27.223529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.344 [2024-10-17 13:14:27.223559] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.344 [2024-10-17 13:14:27.223575] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:19.344 #15 NEW cov: 12298 ft: 13168 corp: 3/53b lim: 35 exec/s: 0 rss: 73Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:06:19.344 [2024-10-17 13:14:27.313447] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000071a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.344 [2024-10-17 13:14:27.313482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.344 [2024-10-17 13:14:27.313532] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.344 [2024-10-17 13:14:27.313551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.344 #17 NEW cov: 12304 ft: 13624 corp: 4/68b lim: 35 exec/s: 0 rss: 73Mb L: 15/29 MS: 2 ChangeBit-CrossOver- 00:06:19.344 [2024-10-17 13:14:27.363598] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000072e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.344 [2024-10-17 13:14:27.363628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.603 #19 NEW cov: 12389 ft: 14080 corp: 5/85b lim: 35 exec/s: 0 rss: 73Mb L: 17/29 MS: 2 InsertByte-CrossOver- 00:06:19.603 [2024-10-17 13:14:27.423684] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000071a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.603 [2024-10-17 13:14:27.423715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.603 [2024-10-17 13:14:27.423764] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.603 [2024-10-17 13:14:27.423784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.603 #20 NEW cov: 12389 ft: 14163 corp: 6/100b lim: 35 exec/s: 0 rss: 73Mb L: 15/29 MS: 1 ChangeByte- 00:06:19.603 [2024-10-17 13:14:27.513882] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000071a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.603 [2024-10-17 13:14:27.513913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.603 #21 NEW cov: 12389 ft: 14470 corp: 7/108b lim: 35 exec/s: 0 rss: 73Mb L: 8/29 MS: 1 EraseBytes- 00:06:19.603 [2024-10-17 13:14:27.604093] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000071a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.603 [2024-10-17 13:14:27.604125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.862 #22 NEW cov: 12389 ft: 14567 corp: 8/116b lim: 35 exec/s: 0 rss: 73Mb L: 8/29 MS: 1 ChangeBit- 00:06:19.862 [2024-10-17 13:14:27.694483] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.862 [2024-10-17 13:14:27.694514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:06:19.862 [2024-10-17 13:14:27.694549] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.862 [2024-10-17 13:14:27.694565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.862 #23 NEW cov: 12389 ft: 14648 corp: 9/139b lim: 35 exec/s: 0 rss: 73Mb L: 23/29 MS: 1 CopyPart- 00:06:19.862 [2024-10-17 13:14:27.754592] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.862 [2024-10-17 13:14:27.754621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.862 [2024-10-17 13:14:27.754671] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.862 [2024-10-17 13:14:27.754692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.862 NEW_FUNC[1/1]: 0x1bff788 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:19.862 #24 NEW cov: 12412 ft: 14717 corp: 10/162b lim: 35 exec/s: 0 rss: 74Mb L: 23/29 MS: 1 CrossOver- 00:06:19.862 [2024-10-17 13:14:27.844916] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.862 [2024-10-17 13:14:27.844947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.862 [2024-10-17 13:14:27.844981] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.862 [2024-10-17 13:14:27.844997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.862 [2024-10-17 13:14:27.845031] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.862 [2024-10-17 13:14:27.845062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.122 #25 NEW cov: 12412 ft: 14800 corp: 11/192b lim: 35 exec/s: 25 rss: 74Mb L: 30/30 MS: 1 InsertByte- 00:06:20.122 [2024-10-17 13:14:27.935145] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000011a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.122 [2024-10-17 13:14:27.935184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.122 [2024-10-17 13:14:27.935219] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000012e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.122 [2024-10-17 13:14:27.935235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.122 [2024-10-17 13:14:27.935268] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000012e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.122 [2024-10-17 13:14:27.935284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.122 [2024-10-17 13:14:27.935314] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.122 [2024-10-17 13:14:27.935329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.122 #26 NEW cov: 12412 ft: 14948 corp: 12/225b lim: 35 exec/s: 26 rss: 74Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:06:20.122 [2024-10-17 13:14:27.995088] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000011a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.122 [2024-10-17 13:14:27.995117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.122 #27 NEW cov: 12412 ft: 14950 corp: 13/238b lim: 35 exec/s: 27 rss: 74Mb L: 13/33 MS: 1 CrossOver- 00:06:20.122 [2024-10-17 13:14:28.085384] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000011a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.122 [2024-10-17 13:14:28.085414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.122 [2024-10-17 13:14:28.085463] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000012e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.122 [2024-10-17 13:14:28.085485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.122 #28 NEW cov: 12412 ft: 15009 corp: 14/255b lim: 35 exec/s: 28 rss: 74Mb L: 17/33 MS: 1 CopyPart- 00:06:20.380 [2024-10-17 13:14:28.175763] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.380 [2024-10-17 13:14:28.175795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.380 [2024-10-17 13:14:28.175835] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000721 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.380 [2024-10-17 13:14:28.175851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.380 #29 NEW cov: 12412 ft: 15030 corp: 15/278b lim: 35 exec/s: 29 rss: 74Mb L: 23/33 MS: 1 ChangeByte- 00:06:20.380 [2024-10-17 13:14:28.256607] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.380 [2024-10-17 13:14:28.256635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.380 #30 NEW cov: 12412 ft: 15176 corp: 16/295b lim: 35 exec/s: 30 rss: 74Mb L: 17/33 MS: 1 EraseBytes- 00:06:20.380 [2024-10-17 13:14:28.296678] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000011a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.380 [2024-10-17 13:14:28.296704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.380 [2024-10-17 13:14:28.296781] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000012e SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:06:20.380 [2024-10-17 13:14:28.296795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.380 #31 NEW cov: 12412 ft: 15232 corp: 17/311b lim: 35 exec/s: 31 rss: 74Mb L: 16/33 MS: 1 CrossOver- 00:06:20.380 [2024-10-17 13:14:28.356832] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000003e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.380 [2024-10-17 13:14:28.356857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.380 [2024-10-17 13:14:28.356919] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000741 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.380 [2024-10-17 13:14:28.356933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.380 #32 NEW cov: 12412 ft: 15260 corp: 18/329b lim: 35 exec/s: 32 rss: 74Mb L: 18/33 MS: 1 InsertByte- 00:06:20.380 [2024-10-17 13:14:28.416995] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000071a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.380 [2024-10-17 13:14:28.417021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.380 [2024-10-17 13:14:28.417095] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.380 [2024-10-17 13:14:28.417109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.639 #33 NEW cov: 12412 ft: 15351 corp: 19/344b lim: 35 exec/s: 33 rss: 74Mb L: 15/33 MS: 1 ChangeByte- 00:06:20.639 [2024-10-17 13:14:28.457084] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000011a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.639 [2024-10-17 13:14:28.457110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.639 [2024-10-17 13:14:28.457171] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000012e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.639 [2024-10-17 13:14:28.457185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.639 #34 NEW cov: 12412 ft: 15371 corp: 20/361b lim: 35 exec/s: 34 rss: 74Mb L: 17/33 MS: 1 ChangeBit- 00:06:20.639 [2024-10-17 13:14:28.517327] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000072e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.639 [2024-10-17 13:14:28.517352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.639 #35 NEW cov: 12412 ft: 15385 corp: 21/379b lim: 35 exec/s: 35 rss: 74Mb L: 18/33 MS: 1 InsertByte- 00:06:20.639 [2024-10-17 13:14:28.577291] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000071a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.639 [2024-10-17 13:14:28.577316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.639 #36 NEW cov: 12412 
ft: 15415 corp: 22/388b lim: 35 exec/s: 36 rss: 74Mb L: 9/33 MS: 1 EraseBytes- 00:06:20.639 [2024-10-17 13:14:28.617401] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000071a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.639 [2024-10-17 13:14:28.617426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.639 #37 NEW cov: 12412 ft: 15478 corp: 23/397b lim: 35 exec/s: 37 rss: 74Mb L: 9/33 MS: 1 InsertByte- 00:06:20.639 [2024-10-17 13:14:28.657981] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.639 [2024-10-17 13:14:28.658006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.639 [2024-10-17 13:14:28.658066] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.639 [2024-10-17 13:14:28.658080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.639 [2024-10-17 13:14:28.658137] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.639 [2024-10-17 13:14:28.658155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.639 #38 NEW cov: 12412 ft: 15504 corp: 24/428b lim: 35 exec/s: 38 rss: 74Mb L: 31/33 MS: 1 InsertRepeatedBytes- 00:06:20.899 [2024-10-17 13:14:28.697763] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000071a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.899 [2024-10-17 13:14:28.697788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.899 [2024-10-17 13:14:28.697863] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.899 [2024-10-17 13:14:28.697877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.899 #39 NEW cov: 12412 ft: 15541 corp: 25/443b lim: 35 exec/s: 39 rss: 74Mb L: 15/33 MS: 1 ShuffleBytes- 00:06:20.899 [2024-10-17 13:14:28.737750] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000071a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.899 [2024-10-17 13:14:28.737775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.899 #40 NEW cov: 12412 ft: 15556 corp: 26/451b lim: 35 exec/s: 40 rss: 74Mb L: 8/33 MS: 1 ShuffleBytes- 00:06:20.899 [2024-10-17 13:14:28.778304] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.899 [2024-10-17 13:14:28.778330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.899 [2024-10-17 13:14:28.778404] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.899 [2024-10-17 13:14:28.778418] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.899 [2024-10-17 13:14:28.778474] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.899 [2024-10-17 13:14:28.778490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.899 #41 NEW cov: 12412 ft: 15566 corp: 27/480b lim: 35 exec/s: 41 rss: 74Mb L: 29/33 MS: 1 ChangeByte- 00:06:20.899 [2024-10-17 13:14:28.817990] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000071a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.899 [2024-10-17 13:14:28.818025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.899 #42 NEW cov: 12412 ft: 15578 corp: 28/491b lim: 35 exec/s: 42 rss: 74Mb L: 11/33 MS: 1 CopyPart- 00:06:20.899 [2024-10-17 13:14:28.858105] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.899 [2024-10-17 13:14:28.858130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.899 #43 NEW cov: 12412 ft: 15591 corp: 29/498b lim: 35 exec/s: 21 rss: 74Mb L: 7/33 MS: 1 EraseBytes- 00:06:20.899 #43 DONE cov: 12412 ft: 15591 corp: 29/498b lim: 35 exec/s: 21 rss: 74Mb 00:06:20.899 Done 43 runs in 2 second(s) 00:06:21.159 13:14:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:06:21.159 13:14:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:21.159 13:14:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:21.159 13:14:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:06:21.159 13:14:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:06:21.159 13:14:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:21.159 13:14:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:21.159 13:14:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:21.159 13:14:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:06:21.159 13:14:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:21.159 13:14:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:21.159 13:14:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:06:21.159 13:14:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:06:21.159 13:14:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:21.159 13:14:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:06:21.159 13:14:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:21.159 13:14:28 llvm_fuzz.nvmf_llvm_fuzz 
-- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:21.159 13:14:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:21.159 13:14:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:06:21.159 [2024-10-17 13:14:29.026555] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:06:21.159 [2024-10-17 13:14:29.026627] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3846863 ] 00:06:21.159 [2024-10-17 13:14:29.200117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.418 [2024-10-17 13:14:29.234570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.418 [2024-10-17 13:14:29.293252] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.419 [2024-10-17 13:14:29.309622] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:06:21.419 INFO: Running with entropic power schedule (0xFF, 100). 00:06:21.419 INFO: Seed: 484515647 00:06:21.419 INFO: Loaded 1 modules (384632 inline 8-bit counters): 384632 [0x2bf274c, 0x2c505c4), 00:06:21.419 INFO: Loaded 1 PC tables (384632 PCs): 384632 [0x2c505c8,0x322ed48), 00:06:21.419 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:21.419 INFO: A corpus is not provided, starting from an empty corpus 00:06:21.419 #2 INITED exec/s: 0 rss: 65Mb 00:06:21.419 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:21.419 This may also happen if the target rejected all inputs we tried so far 00:06:21.419 [2024-10-17 13:14:29.354369] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.419 [2024-10-17 13:14:29.354403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:21.419 [2024-10-17 13:14:29.354438] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.419 [2024-10-17 13:14:29.354456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:21.678 NEW_FUNC[1/715]: 0x452708 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:06:21.678 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:21.678 #28 NEW cov: 12275 ft: 12273 corp: 2/60b lim: 105 exec/s: 0 rss: 73Mb L: 59/59 MS: 1 InsertRepeatedBytes- 00:06:21.678 [2024-10-17 13:14:29.705238] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.678 [2024-10-17 13:14:29.705275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:21.938 #29 NEW cov: 12388 ft: 13157 corp: 3/100b lim: 105 exec/s: 0 rss: 73Mb L: 40/59 MS: 1 EraseBytes- 00:06:21.938 [2024-10-17 13:14:29.795360] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.938 [2024-10-17 13:14:29.795391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:21.938 #30 NEW cov: 12394 ft: 13533 corp: 4/140b lim: 105 exec/s: 0 rss: 73Mb L: 40/59 MS: 1 ChangeBinInt- 00:06:21.938 [2024-10-17 13:14:29.885679] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.938 [2024-10-17 13:14:29.885712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:21.938 [2024-10-17 13:14:29.885747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.938 [2024-10-17 13:14:29.885765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:21.938 [2024-10-17 13:14:29.885797] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.938 [2024-10-17 13:14:29.885814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:21.938 #31 NEW cov: 12479 ft: 14098 corp: 5/203b lim: 105 exec/s: 0 rss: 73Mb L: 63/63 MS: 1 CopyPart- 00:06:21.938 [2024-10-17 13:14:29.945841] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.938 [2024-10-17 13:14:29.945877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:21.938 [2024-10-17 13:14:29.945911] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.938 [2024-10-17 13:14:29.945929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:21.938 [2024-10-17 13:14:29.945961] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.938 [2024-10-17 13:14:29.945978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:22.197 #32 NEW cov: 12479 ft: 14252 corp: 6/266b lim: 105 exec/s: 0 rss: 73Mb L: 63/63 MS: 1 ChangeBinInt- 00:06:22.197 [2024-10-17 13:14:30.046059] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.197 [2024-10-17 13:14:30.046091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:22.197 [2024-10-17 13:14:30.046141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.197 [2024-10-17 13:14:30.046174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:22.197 #33 NEW cov: 12479 ft: 14339 corp: 7/313b lim: 105 exec/s: 0 rss: 73Mb L: 47/63 MS: 1 CopyPart- 00:06:22.197 [2024-10-17 13:14:30.136331] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.197 [2024-10-17 13:14:30.136368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:22.197 [2024-10-17 13:14:30.136404] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:256 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.197 [2024-10-17 13:14:30.136422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:22.197 #34 NEW cov: 12479 ft: 14388 corp: 8/360b lim: 105 exec/s: 0 rss: 73Mb L: 47/63 MS: 1 ChangeByte- 00:06:22.197 [2024-10-17 13:14:30.236578] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.197 [2024-10-17 13:14:30.236610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:22.197 [2024-10-17 13:14:30.236646] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.197 [2024-10-17 13:14:30.236664] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:22.456 NEW_FUNC[1/1]: 0x1bff788 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:22.456 #35 NEW cov: 12496 ft: 14420 corp: 9/407b lim: 105 exec/s: 0 rss: 74Mb L: 47/63 MS: 1 ChangeByte- 00:06:22.456 [2024-10-17 13:14:30.296683] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.456 [2024-10-17 13:14:30.296713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:22.456 [2024-10-17 13:14:30.296747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:47104 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.456 [2024-10-17 13:14:30.296764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:22.456 #36 NEW cov: 12496 ft: 14467 corp: 10/467b lim: 105 exec/s: 0 rss: 74Mb L: 60/63 MS: 1 InsertByte- 00:06:22.456 [2024-10-17 13:14:30.356826] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.456 [2024-10-17 13:14:30.356857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:22.456 #37 NEW cov: 12496 ft: 14527 corp: 11/507b lim: 105 exec/s: 37 rss: 74Mb L: 40/63 MS: 1 CrossOver- 00:06:22.456 [2024-10-17 13:14:30.416952] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.456 [2024-10-17 13:14:30.416983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:22.456 #38 NEW cov: 12496 ft: 14542 corp: 12/548b lim: 105 exec/s: 38 rss: 74Mb L: 41/63 MS: 1 InsertByte- 00:06:22.456 [2024-10-17 13:14:30.477186] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.456 [2024-10-17 13:14:30.477217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:22.456 [2024-10-17 13:14:30.477252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:47104 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.456 [2024-10-17 13:14:30.477269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:22.716 #39 NEW cov: 12496 ft: 14566 corp: 13/608b lim: 105 exec/s: 39 rss: 74Mb L: 60/63 MS: 1 CopyPart- 00:06:22.716 [2024-10-17 13:14:30.567416] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.716 [2024-10-17 13:14:30.567448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:22.716 [2024-10-17 13:14:30.567483] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.716 [2024-10-17 13:14:30.567501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:22.716 #40 NEW cov: 12496 ft: 14612 corp: 14/650b lim: 105 exec/s: 40 rss: 74Mb L: 42/63 MS: 1 InsertByte- 00:06:22.716 [2024-10-17 13:14:30.667709] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.716 [2024-10-17 13:14:30.667741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:22.716 [2024-10-17 13:14:30.667776] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.716 [2024-10-17 13:14:30.667795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:22.716 #41 NEW cov: 12496 ft: 14633 corp: 15/692b lim: 105 exec/s: 41 rss: 74Mb L: 42/63 MS: 1 CMP- DE: "\001\000"- 00:06:22.716 [2024-10-17 13:14:30.727864] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4294967040 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.716 [2024-10-17 13:14:30.727895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:22.716 [2024-10-17 13:14:30.727930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.716 [2024-10-17 13:14:30.727948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:22.716 #42 NEW cov: 12496 ft: 14651 corp: 16/738b lim: 105 exec/s: 42 rss: 74Mb L: 46/63 MS: 1 InsertRepeatedBytes- 00:06:22.976 [2024-10-17 13:14:30.777984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.976 [2024-10-17 13:14:30.778015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:22.976 [2024-10-17 13:14:30.778049] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.976 [2024-10-17 13:14:30.778067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:22.976 #43 NEW cov: 12496 ft: 14659 corp: 17/785b lim: 105 exec/s: 43 rss: 74Mb L: 47/63 MS: 1 ChangeBit- 00:06:22.976 [2024-10-17 13:14:30.868301] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.976 [2024-10-17 13:14:30.868332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:22.976 [2024-10-17 13:14:30.868366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.976 
[2024-10-17 13:14:30.868384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:22.976 [2024-10-17 13:14:30.868424] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.976 [2024-10-17 13:14:30.868441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:22.976 #44 NEW cov: 12496 ft: 14697 corp: 18/867b lim: 105 exec/s: 44 rss: 74Mb L: 82/82 MS: 1 CopyPart- 00:06:22.976 [2024-10-17 13:14:30.928308] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.976 [2024-10-17 13:14:30.928337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:22.976 [2024-10-17 13:14:30.928385] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551599 len:256 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.976 [2024-10-17 13:14:30.928408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:22.976 #45 NEW cov: 12496 ft: 14756 corp: 19/914b lim: 105 exec/s: 45 rss: 74Mb L: 47/82 MS: 1 ChangeBit- 00:06:22.976 [2024-10-17 13:14:31.018532] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18386789903670181887 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.976 [2024-10-17 13:14:31.018563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:23.235 #46 NEW cov: 12496 ft: 14766 corp: 20/954b lim: 105 exec/s: 46 rss: 74Mb L: 40/82 MS: 1 ChangeByte- 00:06:23.235 [2024-10-17 13:14:31.068714] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:23.235 [2024-10-17 13:14:31.068744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:23.235 [2024-10-17 13:14:31.068794] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:23.235 [2024-10-17 13:14:31.068815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:23.235 [2024-10-17 13:14:31.068851] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:23.235 [2024-10-17 13:14:31.068867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:23.235 #47 NEW cov: 12496 ft: 14802 corp: 21/1017b lim: 105 exec/s: 47 rss: 74Mb L: 63/82 MS: 1 ShuffleBytes- 00:06:23.235 [2024-10-17 13:14:31.158845] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:23.235 [2024-10-17 13:14:31.158874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:23.235 #48 NEW cov: 12496 ft: 14845 corp: 22/1053b lim: 105 exec/s: 48 rss: 74Mb L: 36/82 MS: 1 EraseBytes- 00:06:23.235 [2024-10-17 13:14:31.219058] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:23.235 [2024-10-17 13:14:31.219087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:23.235 [2024-10-17 13:14:31.219137] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:23.235 [2024-10-17 13:14:31.219166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:23.235 #49 NEW cov: 12503 ft: 14873 corp: 23/1095b lim: 105 exec/s: 49 rss: 74Mb L: 42/82 MS: 1 InsertByte- 00:06:23.235 [2024-10-17 13:14:31.279177] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:23.235 [2024-10-17 13:14:31.279207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:23.495 #50 NEW cov: 12503 ft: 14897 corp: 24/1136b lim: 105 exec/s: 50 rss: 74Mb L: 41/82 MS: 1 ChangeByte- 00:06:23.495 [2024-10-17 13:14:31.339361] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:23.495 [2024-10-17 13:14:31.339391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:23.495 #51 NEW cov: 12503 ft: 14923 corp: 25/1174b lim: 105 exec/s: 25 rss: 74Mb L: 38/82 MS: 1 CrossOver- 00:06:23.495 #51 DONE cov: 12503 ft: 14923 corp: 25/1174b lim: 105 exec/s: 25 rss: 74Mb 00:06:23.495 ###### Recommended dictionary. ###### 00:06:23.495 "\001\000" # Uses: 0 00:06:23.495 ###### End of recommended dictionary. 
###### 00:06:23.495 Done 51 runs in 2 second(s) 00:06:23.495 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:06:23.495 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:23.495 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:23.495 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:06:23.495 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:06:23.495 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:23.495 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:23.495 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:06:23.495 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:06:23.495 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:23.495 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:23.495 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:06:23.495 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:06:23.495 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:06:23.495 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:06:23.495 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:23.495 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:23.495 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:23.495 13:14:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:06:23.495 [2024-10-17 13:14:31.530094] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
00:06:23.495 [2024-10-17 13:14:31.530169] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3847392 ] 00:06:23.754 [2024-10-17 13:14:31.708347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.754 [2024-10-17 13:14:31.741025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.754 [2024-10-17 13:14:31.799841] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:24.013 [2024-10-17 13:14:31.816216] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:06:24.013 INFO: Running with entropic power schedule (0xFF, 100). 00:06:24.013 INFO: Seed: 2989514071 00:06:24.013 INFO: Loaded 1 modules (384632 inline 8-bit counters): 384632 [0x2bf274c, 0x2c505c4), 00:06:24.013 INFO: Loaded 1 PC tables (384632 PCs): 384632 [0x2c505c8,0x322ed48), 00:06:24.013 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:06:24.013 INFO: A corpus is not provided, starting from an empty corpus 00:06:24.013 #2 INITED exec/s: 0 rss: 65Mb 00:06:24.013 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:24.013 This may also happen if the target rejected all inputs we tried so far 00:06:24.013 [2024-10-17 13:14:31.892112] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.013 [2024-10-17 13:14:31.892149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:24.272 NEW_FUNC[1/716]: 0x455a88 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:06:24.272 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:24.272 #4 NEW cov: 12295 ft: 12296 corp: 2/38b lim: 120 exec/s: 0 rss: 73Mb L: 37/37 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:24.272 [2024-10-17 13:14:32.233453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.272 [2024-10-17 13:14:32.233498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:24.272 #20 NEW cov: 12409 ft: 12947 corp: 3/76b lim: 120 exec/s: 0 rss: 73Mb L: 38/38 MS: 1 InsertByte- 00:06:24.272 [2024-10-17 13:14:32.303744] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.272 [2024-10-17 13:14:32.303773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:24.531 #21 NEW cov: 12415 ft: 13270 corp: 4/105b lim: 120 exec/s: 0 rss: 73Mb L: 29/38 MS: 1 EraseBytes- 00:06:24.531 [2024-10-17 13:14:32.354383] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4294967040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.532 [2024-10-17 13:14:32.354418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:24.532 [2024-10-17 13:14:32.354564] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.532 [2024-10-17 13:14:32.354592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:24.532 [2024-10-17 13:14:32.354730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744069414649855 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.532 [2024-10-17 13:14:32.354760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:24.532 #22 NEW cov: 12500 ft: 14319 corp: 5/190b lim: 120 exec/s: 0 rss: 73Mb L: 85/85 MS: 1 InsertRepeatedBytes- 00:06:24.532 [2024-10-17 13:14:32.423980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.532 [2024-10-17 13:14:32.424018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:24.532 #28 NEW cov: 12500 ft: 14431 corp: 6/234b lim: 120 exec/s: 0 rss: 73Mb L: 44/85 MS: 1 InsertRepeatedBytes- 00:06:24.532 [2024-10-17 13:14:32.474048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.532 [2024-10-17 13:14:32.474077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:24.532 #29 NEW cov: 12500 ft: 14533 corp: 7/269b lim: 120 exec/s: 0 rss: 73Mb L: 35/85 MS: 1 EraseBytes- 00:06:24.532 [2024-10-17 13:14:32.525091] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4294967040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.532 [2024-10-17 13:14:32.525129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:24.532 [2024-10-17 13:14:32.525255] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.532 [2024-10-17 13:14:32.525282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:24.532 [2024-10-17 13:14:32.525424] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744069414649855 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.532 [2024-10-17 13:14:32.525450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:24.532 #30 NEW cov: 12500 ft: 14588 corp: 8/354b lim: 120 exec/s: 0 rss: 73Mb L: 85/85 MS: 1 ShuffleBytes- 00:06:24.791 [2024-10-17 13:14:32.594699] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.791 [2024-10-17 13:14:32.594727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:24.791 #36 NEW cov: 12500 ft: 14610 corp: 9/391b lim: 120 exec/s: 0 rss: 73Mb L: 
37/85 MS: 1 ShuffleBytes- 00:06:24.791 [2024-10-17 13:14:32.644984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073675997183 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.791 [2024-10-17 13:14:32.645019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:24.791 #37 NEW cov: 12500 ft: 14663 corp: 10/428b lim: 120 exec/s: 0 rss: 73Mb L: 37/85 MS: 1 ChangeBit- 00:06:24.791 [2024-10-17 13:14:32.696169] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4294967040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.791 [2024-10-17 13:14:32.696209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:24.791 [2024-10-17 13:14:32.696335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.791 [2024-10-17 13:14:32.696360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:24.791 [2024-10-17 13:14:32.696488] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744069414649855 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.791 [2024-10-17 13:14:32.696513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:24.791 [2024-10-17 13:14:32.696646] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073694806015 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.791 [2024-10-17 13:14:32.696673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:24.791 #38 NEW cov: 12500 ft: 15080 corp: 11/540b lim: 120 exec/s: 0 rss: 73Mb L: 112/112 MS: 1 InsertRepeatedBytes- 00:06:24.791 [2024-10-17 13:14:32.765669] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.791 [2024-10-17 13:14:32.765696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:24.791 NEW_FUNC[1/1]: 0x1bff788 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:24.791 #39 NEW cov: 12523 ft: 15145 corp: 12/572b lim: 120 exec/s: 0 rss: 73Mb L: 32/112 MS: 1 CrossOver- 00:06:24.791 [2024-10-17 13:14:32.835878] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.791 [2024-10-17 13:14:32.835905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:25.050 #40 NEW cov: 12523 ft: 15172 corp: 13/616b lim: 120 exec/s: 40 rss: 74Mb L: 44/112 MS: 1 CrossOver- 00:06:25.050 [2024-10-17 13:14:32.906989] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4294967040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.050 [2024-10-17 13:14:32.907026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:06:25.050 [2024-10-17 13:14:32.907140] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:28673 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.050 [2024-10-17 13:14:32.907168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:25.050 [2024-10-17 13:14:32.907299] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744069414649855 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.050 [2024-10-17 13:14:32.907323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:25.050 [2024-10-17 13:14:32.907456] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073694806015 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.050 [2024-10-17 13:14:32.907482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:25.050 #46 NEW cov: 12523 ft: 15259 corp: 14/728b lim: 120 exec/s: 46 rss: 74Mb L: 112/112 MS: 1 ChangeBinInt- 00:06:25.050 [2024-10-17 13:14:32.977387] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4294967040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.050 [2024-10-17 13:14:32.977427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:25.050 [2024-10-17 13:14:32.977521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.050 [2024-10-17 13:14:32.977547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:25.050 [2024-10-17 13:14:32.977676] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744069414649855 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.050 [2024-10-17 13:14:32.977701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:25.050 [2024-10-17 13:14:32.977832] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073694806015 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.050 [2024-10-17 13:14:32.977857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:25.050 #47 NEW cov: 12523 ft: 15351 corp: 15/846b lim: 120 exec/s: 47 rss: 74Mb L: 118/118 MS: 1 InsertRepeatedBytes- 00:06:25.050 [2024-10-17 13:14:33.026666] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073675997183 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.050 [2024-10-17 13:14:33.026692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:25.050 #48 NEW cov: 12523 ft: 15364 corp: 16/883b lim: 120 exec/s: 48 rss: 74Mb L: 37/118 MS: 1 ShuffleBytes- 00:06:25.050 [2024-10-17 13:14:33.096956] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.050 [2024-10-17 13:14:33.096994] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:25.309 #49 NEW cov: 12523 ft: 15378 corp: 17/928b lim: 120 exec/s: 49 rss: 74Mb L: 45/118 MS: 1 InsertByte- 00:06:25.309 [2024-10-17 13:14:33.147112] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65535 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.309 [2024-10-17 13:14:33.147147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:25.309 #50 NEW cov: 12523 ft: 15398 corp: 18/963b lim: 120 exec/s: 50 rss: 74Mb L: 35/118 MS: 1 ChangeBinInt- 00:06:25.309 [2024-10-17 13:14:33.217373] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073675997183 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.309 [2024-10-17 13:14:33.217404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:25.309 #51 NEW cov: 12523 ft: 15419 corp: 19/998b lim: 120 exec/s: 51 rss: 74Mb L: 35/118 MS: 1 EraseBytes- 00:06:25.309 [2024-10-17 13:14:33.287972] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.309 [2024-10-17 13:14:33.288006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:25.309 [2024-10-17 13:14:33.288117] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865012304223517 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.309 [2024-10-17 13:14:33.288143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:25.309 #52 NEW cov: 12523 ft: 15739 corp: 20/1046b lim: 120 exec/s: 52 rss: 74Mb L: 48/118 MS: 1 InsertRepeatedBytes- 00:06:25.309 [2024-10-17 13:14:33.337750] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.309 [2024-10-17 13:14:33.337782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:25.309 #62 NEW cov: 12523 ft: 15792 corp: 21/1082b lim: 120 exec/s: 62 rss: 74Mb L: 36/118 MS: 5 CopyPart-ChangeByte-EraseBytes-CopyPart-CrossOver- 00:06:25.569 [2024-10-17 13:14:33.388361] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073675997183 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.569 [2024-10-17 13:14:33.388400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:25.569 [2024-10-17 13:14:33.388521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.569 [2024-10-17 13:14:33.388547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:25.569 #68 NEW cov: 12523 ft: 15907 corp: 22/1138b lim: 120 exec/s: 68 rss: 74Mb L: 56/118 MS: 1 CrossOver- 00:06:25.569 [2024-10-17 13:14:33.448190] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.569 [2024-10-17 13:14:33.448227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:25.569 #69 NEW cov: 12523 ft: 15978 corp: 23/1170b lim: 120 exec/s: 69 rss: 74Mb L: 32/118 MS: 1 CMP- DE: "\001\000\001\022"- 00:06:25.569 [2024-10-17 13:14:33.518768] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65535 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.569 [2024-10-17 13:14:33.518804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:25.569 [2024-10-17 13:14:33.518961] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18374686479671623679 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.569 [2024-10-17 13:14:33.518984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:25.569 #70 NEW cov: 12523 ft: 15995 corp: 24/1233b lim: 120 exec/s: 70 rss: 74Mb L: 63/118 MS: 1 CopyPart- 00:06:25.569 [2024-10-17 13:14:33.588864] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.569 [2024-10-17 13:14:33.588890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:25.569 #71 NEW cov: 12523 ft: 16015 corp: 25/1265b lim: 120 exec/s: 71 rss: 74Mb L: 32/118 MS: 1 ChangeBinInt- 00:06:25.827 [2024-10-17 13:14:33.639400] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.828 [2024-10-17 13:14:33.639437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:25.828 [2024-10-17 13:14:33.639582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709494015 len:10281 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.828 [2024-10-17 13:14:33.639609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:25.828 #72 NEW cov: 12523 ft: 16025 corp: 26/1314b lim: 120 exec/s: 72 rss: 74Mb L: 49/118 MS: 1 InsertRepeatedBytes- 00:06:25.828 [2024-10-17 13:14:33.689262] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.828 [2024-10-17 13:14:33.689289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:25.828 #73 NEW cov: 12523 ft: 16092 corp: 27/1350b lim: 120 exec/s: 73 rss: 74Mb L: 36/118 MS: 1 ChangeByte- 00:06:25.828 [2024-10-17 13:14:33.759411] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.828 [2024-10-17 13:14:33.759438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:25.828 #74 
NEW cov: 12523 ft: 16120 corp: 28/1394b lim: 120 exec/s: 74 rss: 75Mb L: 44/118 MS: 1 ShuffleBytes- 00:06:25.828 [2024-10-17 13:14:33.829665] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073675997183 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.828 [2024-10-17 13:14:33.829691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:25.828 #75 NEW cov: 12523 ft: 16122 corp: 29/1431b lim: 120 exec/s: 75 rss: 75Mb L: 37/118 MS: 1 PersAutoDict- DE: "\001\000\001\022"- 00:06:26.087 [2024-10-17 13:14:33.879902] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073675997000 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.087 [2024-10-17 13:14:33.879938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:26.087 #76 NEW cov: 12523 ft: 16135 corp: 30/1468b lim: 120 exec/s: 38 rss: 75Mb L: 37/118 MS: 1 ChangeByte- 00:06:26.087 #76 DONE cov: 12523 ft: 16135 corp: 30/1468b lim: 120 exec/s: 38 rss: 75Mb 00:06:26.087 ###### Recommended dictionary. ###### 00:06:26.087 "\001\000\001\022" # Uses: 1 00:06:26.087 ###### End of recommended dictionary. ###### 00:06:26.087 Done 76 runs in 2 second(s) 00:06:26.087 13:14:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:06:26.087 13:14:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:26.087 13:14:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:26.087 13:14:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:06:26.087 13:14:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:06:26.087 13:14:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:26.087 13:14:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:26.087 13:14:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:06:26.087 13:14:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:06:26.087 13:14:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:26.087 13:14:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:26.087 13:14:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:06:26.087 13:14:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:06:26.087 13:14:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:06:26.087 13:14:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:06:26.087 13:14:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:26.087 13:14:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:26.087 13:14:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:26.087 13:14:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 
-- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:06:26.087 [2024-10-17 13:14:34.049902] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:06:26.087 [2024-10-17 13:14:34.049976] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3847692 ] 00:06:26.346 [2024-10-17 13:14:34.229444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.346 [2024-10-17 13:14:34.262881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.346 [2024-10-17 13:14:34.321861] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:26.346 [2024-10-17 13:14:34.338216] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:06:26.346 INFO: Running with entropic power schedule (0xFF, 100). 00:06:26.346 INFO: Seed: 1217550365 00:06:26.346 INFO: Loaded 1 modules (384632 inline 8-bit counters): 384632 [0x2bf274c, 0x2c505c4), 00:06:26.346 INFO: Loaded 1 PC tables (384632 PCs): 384632 [0x2c505c8,0x322ed48), 00:06:26.346 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:06:26.346 INFO: A corpus is not provided, starting from an empty corpus 00:06:26.346 #2 INITED exec/s: 0 rss: 65Mb 00:06:26.346 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:26.346 This may also happen if the target rejected all inputs we tried so far 00:06:26.346 [2024-10-17 13:14:34.383375] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:26.346 [2024-10-17 13:14:34.383404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:26.864 NEW_FUNC[1/714]: 0x459378 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:06:26.864 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:26.864 #4 NEW cov: 12239 ft: 12215 corp: 2/31b lim: 100 exec/s: 0 rss: 73Mb L: 30/30 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:26.864 [2024-10-17 13:14:34.714313] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:26.864 [2024-10-17 13:14:34.714355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:26.864 #5 NEW cov: 12352 ft: 12821 corp: 3/62b lim: 100 exec/s: 0 rss: 73Mb L: 31/31 MS: 1 CrossOver- 00:06:26.864 [2024-10-17 13:14:34.754511] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:26.864 [2024-10-17 13:14:34.754538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:26.864 [2024-10-17 13:14:34.754585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:26.864 [2024-10-17 13:14:34.754600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:26.864 [2024-10-17 13:14:34.754653] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:26.864 [2024-10-17 13:14:34.754669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:26.864 #11 NEW cov: 12358 ft: 13460 corp: 4/122b lim: 100 exec/s: 0 rss: 73Mb L: 60/60 MS: 1 CrossOver- 00:06:26.864 [2024-10-17 13:14:34.814673] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:26.864 [2024-10-17 13:14:34.814699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:26.864 [2024-10-17 13:14:34.814745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:26.864 [2024-10-17 13:14:34.814760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:26.864 [2024-10-17 13:14:34.814813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:26.864 [2024-10-17 13:14:34.814828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:26.864 #12 NEW cov: 12443 ft: 13690 corp: 5/182b lim: 100 exec/s: 0 rss: 73Mb L: 60/60 MS: 1 ShuffleBytes- 00:06:26.864 [2024-10-17 13:14:34.874563] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:26.864 [2024-10-17 
13:14:34.874589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:26.864 #13 NEW cov: 12443 ft: 13810 corp: 6/213b lim: 100 exec/s: 0 rss: 73Mb L: 31/60 MS: 1 ChangeByte- 00:06:27.123 [2024-10-17 13:14:34.934740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:27.123 [2024-10-17 13:14:34.934768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:27.123 #14 NEW cov: 12443 ft: 13959 corp: 7/244b lim: 100 exec/s: 0 rss: 73Mb L: 31/60 MS: 1 ChangeBit- 00:06:27.123 [2024-10-17 13:14:34.974887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:27.123 [2024-10-17 13:14:34.974914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:27.123 #15 NEW cov: 12443 ft: 14037 corp: 8/276b lim: 100 exec/s: 0 rss: 73Mb L: 32/60 MS: 1 InsertByte- 00:06:27.123 [2024-10-17 13:14:35.015190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:27.123 [2024-10-17 13:14:35.015216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:27.123 [2024-10-17 13:14:35.015267] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:27.123 [2024-10-17 13:14:35.015283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:27.123 [2024-10-17 13:14:35.015338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:27.123 [2024-10-17 13:14:35.015353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:27.123 #21 NEW cov: 12443 ft: 14066 corp: 9/337b lim: 100 exec/s: 0 rss: 73Mb L: 61/61 MS: 1 InsertByte- 00:06:27.123 [2024-10-17 13:14:35.075156] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:27.123 [2024-10-17 13:14:35.075183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:27.123 #22 NEW cov: 12443 ft: 14158 corp: 10/369b lim: 100 exec/s: 0 rss: 73Mb L: 32/61 MS: 1 InsertByte- 00:06:27.123 [2024-10-17 13:14:35.135408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:27.123 [2024-10-17 13:14:35.135434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:27.123 [2024-10-17 13:14:35.135481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:27.123 [2024-10-17 13:14:35.135496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:27.383 #23 NEW cov: 12443 ft: 14434 corp: 11/411b lim: 100 exec/s: 0 rss: 73Mb L: 42/61 MS: 1 InsertRepeatedBytes- 00:06:27.383 [2024-10-17 13:14:35.195455] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:27.383 [2024-10-17 13:14:35.195482] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:27.383 #24 NEW cov: 12443 ft: 14449 corp: 12/439b lim: 100 exec/s: 0 rss: 73Mb L: 28/61 MS: 1 EraseBytes- 00:06:27.383 [2024-10-17 13:14:35.235570] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:27.383 [2024-10-17 13:14:35.235597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:27.383 #25 NEW cov: 12443 ft: 14498 corp: 13/469b lim: 100 exec/s: 0 rss: 73Mb L: 30/61 MS: 1 ShuffleBytes- 00:06:27.383 [2024-10-17 13:14:35.275891] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:27.383 [2024-10-17 13:14:35.275917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:27.383 [2024-10-17 13:14:35.275964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:27.383 [2024-10-17 13:14:35.275979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:27.383 [2024-10-17 13:14:35.276032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:27.383 [2024-10-17 13:14:35.276047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:27.383 NEW_FUNC[1/1]: 0x1bff788 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:27.383 #26 NEW cov: 12466 ft: 14537 corp: 14/530b lim: 100 exec/s: 0 rss: 74Mb L: 61/61 MS: 1 InsertByte- 00:06:27.383 [2024-10-17 13:14:35.315789] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:27.383 [2024-10-17 13:14:35.315815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:27.383 #27 NEW cov: 12466 ft: 14549 corp: 15/565b lim: 100 exec/s: 0 rss: 74Mb L: 35/61 MS: 1 CMP- DE: "\000\000\000\000"- 00:06:27.383 [2024-10-17 13:14:35.375978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:27.383 [2024-10-17 13:14:35.376004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:27.383 #28 NEW cov: 12466 ft: 14614 corp: 16/597b lim: 100 exec/s: 28 rss: 74Mb L: 32/61 MS: 1 ChangeBinInt- 00:06:27.383 [2024-10-17 13:14:35.416209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:27.383 [2024-10-17 13:14:35.416237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:27.383 [2024-10-17 13:14:35.416281] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:27.383 [2024-10-17 13:14:35.416297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:27.642 #37 NEW cov: 12466 ft: 14652 corp: 17/647b lim: 100 exec/s: 37 rss: 74Mb L: 50/61 MS: 4 ShuffleBytes-ChangeBit-ShuffleBytes-CrossOver- 00:06:27.642 [2024-10-17 13:14:35.456200] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:27.642 [2024-10-17 13:14:35.456226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:27.642 #38 NEW cov: 12466 ft: 14669 corp: 18/672b lim: 100 exec/s: 38 rss: 74Mb L: 25/61 MS: 1 EraseBytes- 00:06:27.642 [2024-10-17 13:14:35.496302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:27.642 [2024-10-17 13:14:35.496328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:27.642 #39 NEW cov: 12466 ft: 14686 corp: 19/702b lim: 100 exec/s: 39 rss: 74Mb L: 30/61 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:06:27.642 [2024-10-17 13:14:35.556468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:27.642 [2024-10-17 13:14:35.556495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:27.642 #45 NEW cov: 12466 ft: 14696 corp: 20/734b lim: 100 exec/s: 45 rss: 74Mb L: 32/61 MS: 1 EraseBytes- 00:06:27.642 [2024-10-17 13:14:35.596608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:27.642 [2024-10-17 13:14:35.596636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:27.642 #46 NEW cov: 12466 ft: 14760 corp: 21/765b lim: 100 exec/s: 46 rss: 74Mb L: 31/61 MS: 1 InsertByte- 00:06:27.642 [2024-10-17 13:14:35.657087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:27.642 [2024-10-17 13:14:35.657113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:27.642 [2024-10-17 13:14:35.657169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:27.642 [2024-10-17 13:14:35.657184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:27.642 [2024-10-17 13:14:35.657235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:27.642 [2024-10-17 13:14:35.657250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:27.642 [2024-10-17 13:14:35.657303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:27.642 [2024-10-17 13:14:35.657319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:27.642 #47 NEW cov: 12466 ft: 15023 corp: 22/854b lim: 100 exec/s: 47 rss: 74Mb L: 89/89 MS: 1 InsertRepeatedBytes- 00:06:27.901 [2024-10-17 13:14:35.696880] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:27.901 [2024-10-17 13:14:35.696908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:27.901 #48 NEW cov: 12466 ft: 15091 corp: 23/889b lim: 100 exec/s: 48 rss: 74Mb L: 35/89 MS: 1 CopyPart- 00:06:27.901 [2024-10-17 13:14:35.757046] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:27.901 [2024-10-17 13:14:35.757075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:27.901 #49 NEW cov: 12466 ft: 15124 corp: 24/921b lim: 100 exec/s: 49 rss: 74Mb L: 32/89 MS: 1 InsertByte- 00:06:27.901 [2024-10-17 13:14:35.797164] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:27.901 [2024-10-17 13:14:35.797189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:27.901 #50 NEW cov: 12466 ft: 15137 corp: 25/953b lim: 100 exec/s: 50 rss: 74Mb L: 32/89 MS: 1 ShuffleBytes- 00:06:27.901 [2024-10-17 13:14:35.837293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:27.901 [2024-10-17 13:14:35.837320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:27.901 #51 NEW cov: 12466 ft: 15170 corp: 26/985b lim: 100 exec/s: 51 rss: 74Mb L: 32/89 MS: 1 ChangeBinInt- 00:06:27.901 [2024-10-17 13:14:35.897658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:27.901 [2024-10-17 13:14:35.897685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:27.901 [2024-10-17 13:14:35.897733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:27.901 [2024-10-17 13:14:35.897748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:27.901 [2024-10-17 13:14:35.897802] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:27.901 [2024-10-17 13:14:35.897817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:27.901 #52 NEW cov: 12466 ft: 15239 corp: 27/1057b lim: 100 exec/s: 52 rss: 74Mb L: 72/89 MS: 1 CrossOver- 00:06:27.901 [2024-10-17 13:14:35.937568] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:27.901 [2024-10-17 13:14:35.937601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:28.160 #58 NEW cov: 12466 ft: 15245 corp: 28/1089b lim: 100 exec/s: 58 rss: 74Mb L: 32/89 MS: 1 ChangeBinInt- 00:06:28.160 [2024-10-17 13:14:35.997807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:28.160 [2024-10-17 13:14:35.997833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:28.160 [2024-10-17 13:14:35.997870] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:28.160 [2024-10-17 13:14:35.997885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:28.160 #59 NEW cov: 12466 ft: 15258 corp: 29/1144b lim: 100 exec/s: 59 rss: 74Mb L: 55/89 MS: 1 CopyPart- 00:06:28.160 [2024-10-17 13:14:36.037842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:28.160 [2024-10-17 13:14:36.037868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:28.160 #60 NEW cov: 12466 ft: 15265 corp: 30/1179b lim: 100 exec/s: 60 rss: 74Mb L: 35/89 MS: 1 CopyPart- 00:06:28.160 [2024-10-17 13:14:36.077939] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:28.160 [2024-10-17 13:14:36.077966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:28.160 #61 NEW cov: 12466 ft: 15346 corp: 31/1211b lim: 100 exec/s: 61 rss: 74Mb L: 32/89 MS: 1 CopyPart- 00:06:28.160 [2024-10-17 13:14:36.138357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:28.160 [2024-10-17 13:14:36.138383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:28.160 [2024-10-17 13:14:36.138433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:28.160 [2024-10-17 13:14:36.138448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:28.160 [2024-10-17 13:14:36.138501] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:28.160 [2024-10-17 13:14:36.138517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:28.160 #62 NEW cov: 12466 ft: 15361 corp: 32/1271b lim: 100 exec/s: 62 rss: 74Mb L: 60/89 MS: 1 ChangeByte- 00:06:28.160 [2024-10-17 13:14:36.178266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:28.160 [2024-10-17 13:14:36.178292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:28.420 #63 NEW cov: 12466 ft: 15370 corp: 33/1306b lim: 100 exec/s: 63 rss: 75Mb L: 35/89 MS: 1 ShuffleBytes- 00:06:28.420 [2024-10-17 13:14:36.238759] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:28.420 [2024-10-17 13:14:36.238786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:28.420 [2024-10-17 13:14:36.238837] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:28.420 [2024-10-17 13:14:36.238852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:28.420 [2024-10-17 13:14:36.238903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:28.420 [2024-10-17 13:14:36.238918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:28.420 [2024-10-17 13:14:36.238970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:28.420 [2024-10-17 13:14:36.238989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:28.420 #64 NEW cov: 12466 ft: 15384 corp: 34/1387b 
lim: 100 exec/s: 64 rss: 75Mb L: 81/89 MS: 1 InsertRepeatedBytes- 00:06:28.420 [2024-10-17 13:14:36.278498] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:28.420 [2024-10-17 13:14:36.278524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:28.420 #65 NEW cov: 12466 ft: 15416 corp: 35/1422b lim: 100 exec/s: 65 rss: 75Mb L: 35/89 MS: 1 CopyPart- 00:06:28.420 [2024-10-17 13:14:36.318774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:28.420 [2024-10-17 13:14:36.318800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:28.420 [2024-10-17 13:14:36.318847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:28.420 [2024-10-17 13:14:36.318861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:28.420 [2024-10-17 13:14:36.318915] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:28.420 [2024-10-17 13:14:36.318930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:28.420 #66 NEW cov: 12466 ft: 15435 corp: 36/1483b lim: 100 exec/s: 66 rss: 75Mb L: 61/89 MS: 1 InsertRepeatedBytes- 00:06:28.420 [2024-10-17 13:14:36.358693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:28.420 [2024-10-17 13:14:36.358720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:28.420 #67 NEW cov: 12466 ft: 15525 corp: 37/1515b lim: 100 exec/s: 33 rss: 75Mb L: 32/89 MS: 1 ChangeByte- 00:06:28.420 #67 DONE cov: 12466 ft: 15525 corp: 37/1515b lim: 100 exec/s: 33 rss: 75Mb 00:06:28.420 ###### Recommended dictionary. ###### 00:06:28.420 "\000\000\000\000" # Uses: 1 00:06:28.420 ###### End of recommended dictionary. 
###### 00:06:28.420 Done 67 runs in 2 second(s) 00:06:28.679 13:14:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:06:28.679 13:14:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:28.679 13:14:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:28.679 13:14:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:06:28.679 13:14:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:06:28.679 13:14:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:28.679 13:14:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:28.679 13:14:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:06:28.679 13:14:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:06:28.679 13:14:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:28.679 13:14:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:28.679 13:14:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:06:28.679 13:14:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:06:28.679 13:14:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:06:28.679 13:14:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:06:28.679 13:14:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:28.679 13:14:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:28.679 13:14:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:28.679 13:14:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:06:28.679 [2024-10-17 13:14:36.548723] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
00:06:28.679 [2024-10-17 13:14:36.548794] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3848212 ] 00:06:28.679 [2024-10-17 13:14:36.724161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.938 [2024-10-17 13:14:36.758950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.938 [2024-10-17 13:14:36.817818] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:28.938 [2024-10-17 13:14:36.834189] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:06:28.938 INFO: Running with entropic power schedule (0xFF, 100). 00:06:28.938 INFO: Seed: 3713563466 00:06:28.938 INFO: Loaded 1 modules (384632 inline 8-bit counters): 384632 [0x2bf274c, 0x2c505c4), 00:06:28.938 INFO: Loaded 1 PC tables (384632 PCs): 384632 [0x2c505c8,0x322ed48), 00:06:28.938 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:06:28.938 INFO: A corpus is not provided, starting from an empty corpus 00:06:28.938 #2 INITED exec/s: 0 rss: 65Mb 00:06:28.938 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:28.938 This may also happen if the target rejected all inputs we tried so far 00:06:28.938 [2024-10-17 13:14:36.879385] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13599952494648933564 len:48317 00:06:28.938 [2024-10-17 13:14:36.879415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:28.938 [2024-10-17 13:14:36.879464] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 00:06:28.938 [2024-10-17 13:14:36.879481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:29.197 NEW_FUNC[1/714]: 0x45c338 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:06:29.197 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:29.197 #5 NEW cov: 12213 ft: 12211 corp: 2/30b lim: 50 exec/s: 0 rss: 73Mb L: 29/29 MS: 3 CrossOver-ChangeBinInt-InsertRepeatedBytes- 00:06:29.197 [2024-10-17 13:14:37.210453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13599952494648933564 len:48317 00:06:29.197 [2024-10-17 13:14:37.210494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:29.197 [2024-10-17 13:14:37.210558] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 00:06:29.197 [2024-10-17 13:14:37.210580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:29.197 [2024-10-17 13:14:37.210642] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13599927204790975676 len:48139 00:06:29.197 [2024-10-17 
13:14:37.210662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:29.456 #6 NEW cov: 12330 ft: 13062 corp: 3/60b lim: 50 exec/s: 0 rss: 73Mb L: 30/30 MS: 1 InsertByte- 00:06:29.456 [2024-10-17 13:14:37.270513] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13567167836752493756 len:17341 00:06:29.456 [2024-10-17 13:14:37.270542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:29.456 [2024-10-17 13:14:37.270577] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 00:06:29.456 [2024-10-17 13:14:37.270594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:29.456 [2024-10-17 13:14:37.270646] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13599927204790975676 len:48139 00:06:29.456 [2024-10-17 13:14:37.270663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:29.456 #7 NEW cov: 12336 ft: 13324 corp: 4/90b lim: 50 exec/s: 0 rss: 73Mb L: 30/30 MS: 1 ChangeBinInt- 00:06:29.456 [2024-10-17 13:14:37.330691] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 00:06:29.456 [2024-10-17 13:14:37.330721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:29.456 [2024-10-17 13:14:37.330754] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11791448172606497699 len:41892 00:06:29.456 [2024-10-17 13:14:37.330770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:29.456 [2024-10-17 13:14:37.330822] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11791448172606497699 len:41892 00:06:29.456 [2024-10-17 13:14:37.330838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:29.456 #9 NEW cov: 12421 ft: 13604 corp: 5/129b lim: 50 exec/s: 0 rss: 73Mb L: 39/39 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:29.456 [2024-10-17 13:14:37.370780] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13599952494648933564 len:48317 00:06:29.456 [2024-10-17 13:14:37.370809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:29.456 [2024-10-17 13:14:37.370846] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 00:06:29.456 [2024-10-17 13:14:37.370862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:29.456 [2024-10-17 13:14:37.370915] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13599926594905619644 len:48139 00:06:29.456 [2024-10-17 13:14:37.370931] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:29.456 #10 NEW cov: 12421 ft: 13770 corp: 6/159b lim: 50 exec/s: 0 rss: 73Mb L: 30/39 MS: 1 ChangeByte- 00:06:29.456 [2024-10-17 13:14:37.411010] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:9187201950435737471 len:32640 00:06:29.456 [2024-10-17 13:14:37.411038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:29.456 [2024-10-17 13:14:37.411087] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9187201950435737471 len:32640 00:06:29.456 [2024-10-17 13:14:37.411102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:29.456 [2024-10-17 13:14:37.411158] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9187201950435737471 len:32640 00:06:29.456 [2024-10-17 13:14:37.411174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:29.456 [2024-10-17 13:14:37.411229] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:9187201950435737471 len:32640 00:06:29.456 [2024-10-17 13:14:37.411245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:29.456 #13 NEW cov: 12421 ft: 14089 corp: 7/203b lim: 50 exec/s: 0 rss: 73Mb L: 44/44 MS: 3 CopyPart-ChangeByte-InsertRepeatedBytes- 00:06:29.456 [2024-10-17 13:14:37.451011] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 00:06:29.456 [2024-10-17 13:14:37.451038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:29.456 [2024-10-17 13:14:37.451074] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11791448172606497699 len:41892 00:06:29.456 [2024-10-17 13:14:37.451090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:29.456 [2024-10-17 13:14:37.451144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446642521953248163 len:41892 00:06:29.456 [2024-10-17 13:14:37.451166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:29.456 #14 NEW cov: 12421 ft: 14145 corp: 8/242b lim: 50 exec/s: 0 rss: 73Mb L: 39/44 MS: 1 CMP- DE: "\377\377"- 00:06:29.715 [2024-10-17 13:14:37.511212] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13571671436379864252 len:17341 00:06:29.715 [2024-10-17 13:14:37.511241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:29.715 [2024-10-17 13:14:37.511275] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 00:06:29.715 [2024-10-17 13:14:37.511292] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:29.715 [2024-10-17 13:14:37.511345] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13599927204790975676 len:48139 00:06:29.715 [2024-10-17 13:14:37.511362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:29.715 #15 NEW cov: 12421 ft: 14162 corp: 9/272b lim: 50 exec/s: 0 rss: 73Mb L: 30/44 MS: 1 ChangeBit- 00:06:29.715 [2024-10-17 13:14:37.571493] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 00:06:29.715 [2024-10-17 13:14:37.571521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:29.715 [2024-10-17 13:14:37.571569] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11791448172606497699 len:41892 00:06:29.715 [2024-10-17 13:14:37.571585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:29.715 [2024-10-17 13:14:37.571636] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:702825037824 len:41984 00:06:29.715 [2024-10-17 13:14:37.571652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:29.715 [2024-10-17 13:14:37.571705] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:11791448174150001571 len:41892 00:06:29.715 [2024-10-17 13:14:37.571721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:29.715 #16 NEW cov: 12421 ft: 14190 corp: 10/316b lim: 50 exec/s: 0 rss: 73Mb L: 44/44 MS: 1 InsertRepeatedBytes- 00:06:29.715 [2024-10-17 13:14:37.631662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13600023962904739004 len:48317 00:06:29.715 [2024-10-17 13:14:37.631690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:29.715 [2024-10-17 13:14:37.631741] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 00:06:29.715 [2024-10-17 13:14:37.631757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:29.715 [2024-10-17 13:14:37.631808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13599952493558414524 len:48317 00:06:29.715 [2024-10-17 13:14:37.631824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:29.715 [2024-10-17 13:14:37.631876] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:13599952493558414524 len:48317 00:06:29.715 [2024-10-17 13:14:37.631892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:29.715 #17 NEW cov: 12421 ft: 14305 corp: 11/362b lim: 50 exec/s: 0 rss: 73Mb 
L: 46/46 MS: 1 CopyPart- 00:06:29.715 [2024-10-17 13:14:37.691720] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 00:06:29.715 [2024-10-17 13:14:37.691747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:29.715 [2024-10-17 13:14:37.691782] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11791448172606497699 len:41892 00:06:29.715 [2024-10-17 13:14:37.691798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:29.715 [2024-10-17 13:14:37.691851] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446642521953248163 len:41892 00:06:29.715 [2024-10-17 13:14:37.691867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:29.715 #18 NEW cov: 12421 ft: 14397 corp: 12/401b lim: 50 exec/s: 0 rss: 73Mb L: 39/46 MS: 1 ShuffleBytes- 00:06:29.715 [2024-10-17 13:14:37.731801] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13599952494648933564 len:48317 00:06:29.715 [2024-10-17 13:14:37.731828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:29.715 [2024-10-17 13:14:37.731862] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13599952493558480060 len:48317 00:06:29.716 [2024-10-17 13:14:37.731878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:29.716 [2024-10-17 13:14:37.731930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13599927204790975676 len:48139 00:06:29.716 [2024-10-17 13:14:37.731946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:29.716 #19 NEW cov: 12421 ft: 14457 corp: 13/431b lim: 50 exec/s: 0 rss: 73Mb L: 30/46 MS: 1 ChangeBit- 00:06:29.975 [2024-10-17 13:14:37.771933] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5672209775593307324 len:48196 00:06:29.975 [2024-10-17 13:14:37.771961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:29.975 [2024-10-17 13:14:37.772005] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13599952491520441532 len:48317 00:06:29.975 [2024-10-17 13:14:37.772025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:29.975 [2024-10-17 13:14:37.772078] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13599927204790975676 len:48139 00:06:29.975 [2024-10-17 13:14:37.772111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:29.975 NEW_FUNC[1/1]: 0x1bff788 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:29.975 #20 NEW 
cov: 12444 ft: 14543 corp: 14/461b lim: 50 exec/s: 0 rss: 74Mb L: 30/46 MS: 1 ChangeBinInt- 00:06:29.975 [2024-10-17 13:14:37.812149] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13600023962904739004 len:48317 00:06:29.975 [2024-10-17 13:14:37.812180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:29.975 [2024-10-17 13:14:37.812234] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14104355651823910076 len:48317 00:06:29.975 [2024-10-17 13:14:37.812250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:29.975 [2024-10-17 13:14:37.812302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13599952493558414524 len:48317 00:06:29.975 [2024-10-17 13:14:37.812318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:29.975 [2024-10-17 13:14:37.812373] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:13599952493558414524 len:48317 00:06:29.975 [2024-10-17 13:14:37.812389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:29.975 #21 NEW cov: 12444 ft: 14565 corp: 15/507b lim: 50 exec/s: 0 rss: 74Mb L: 46/46 MS: 1 ChangeBinInt- 00:06:29.975 [2024-10-17 13:14:37.872232] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13567167836752493756 len:17341 00:06:29.975 [2024-10-17 13:14:37.872260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:29.975 [2024-10-17 13:14:37.872303] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 00:06:29.975 [2024-10-17 13:14:37.872320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:29.975 [2024-10-17 13:14:37.872376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2286910229603728384 len:42429 00:06:29.975 [2024-10-17 13:14:37.872393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:29.975 #22 NEW cov: 12444 ft: 14573 corp: 16/539b lim: 50 exec/s: 22 rss: 74Mb L: 32/46 MS: 1 CMP- DE: "\000\037"- 00:06:29.975 [2024-10-17 13:14:37.912288] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18427690327436213503 len:48317 00:06:29.975 [2024-10-17 13:14:37.912317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:29.975 [2024-10-17 13:14:37.912357] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13599952493550453571 len:48317 00:06:29.975 [2024-10-17 13:14:37.912373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:29.975 [2024-10-17 13:14:37.912425] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13599952493558414524 len:42429 00:06:29.975 [2024-10-17 13:14:37.912440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:29.975 #23 NEW cov: 12444 ft: 14593 corp: 17/571b lim: 50 exec/s: 23 rss: 74Mb L: 32/46 MS: 1 PersAutoDict- DE: "\377\377"- 00:06:29.975 [2024-10-17 13:14:37.972261] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13599952494648933564 len:48317 00:06:29.975 [2024-10-17 13:14:37.972288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:29.975 #24 NEW cov: 12444 ft: 14950 corp: 18/587b lim: 50 exec/s: 24 rss: 74Mb L: 16/46 MS: 1 EraseBytes- 00:06:29.975 [2024-10-17 13:14:38.012713] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13600023962904739004 len:48317 00:06:29.975 [2024-10-17 13:14:38.012741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:29.975 [2024-10-17 13:14:38.012790] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 00:06:29.975 [2024-10-17 13:14:38.012806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:29.975 [2024-10-17 13:14:38.012858] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13599952493558414524 len:1 00:06:29.975 [2024-10-17 13:14:38.012873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:29.975 [2024-10-17 13:14:38.012926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:13599952490404297916 len:48317 00:06:29.975 [2024-10-17 13:14:38.012942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:30.235 #25 NEW cov: 12444 ft: 14970 corp: 19/636b lim: 50 exec/s: 25 rss: 74Mb L: 49/49 MS: 1 InsertRepeatedBytes- 00:06:30.235 [2024-10-17 13:14:38.052684] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13599952494648933564 len:48317 00:06:30.235 [2024-10-17 13:14:38.052711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:30.235 [2024-10-17 13:14:38.052753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 00:06:30.235 [2024-10-17 13:14:38.052769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:30.235 [2024-10-17 13:14:38.052824] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13599952493558431743 len:48317 00:06:30.235 [2024-10-17 13:14:38.052855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:30.235 #31 NEW cov: 12444 ft: 14984 corp: 20/667b lim: 50 exec/s: 31 rss: 74Mb L: 
31/49 MS: 1 PersAutoDict- DE: "\377\377"- 00:06:30.235 [2024-10-17 13:14:38.092713] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13599952494648933564 len:48317 00:06:30.235 [2024-10-17 13:14:38.092740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:30.235 [2024-10-17 13:14:38.092779] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 00:06:30.235 [2024-10-17 13:14:38.092795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:30.235 #32 NEW cov: 12444 ft: 15060 corp: 21/696b lim: 50 exec/s: 32 rss: 74Mb L: 29/49 MS: 1 EraseBytes- 00:06:30.235 [2024-10-17 13:14:38.132947] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13599952494648933564 len:48317 00:06:30.235 [2024-10-17 13:14:38.132975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:30.235 [2024-10-17 13:14:38.133017] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 00:06:30.235 [2024-10-17 13:14:38.133032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:30.235 [2024-10-17 13:14:38.133085] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13599952493558431743 len:48317 00:06:30.235 [2024-10-17 13:14:38.133101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:30.235 #33 NEW cov: 12444 ft: 15066 corp: 22/727b lim: 50 exec/s: 33 rss: 74Mb L: 31/49 MS: 1 CopyPart- 00:06:30.235 [2024-10-17 13:14:38.193134] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13567167836752493756 len:17341 00:06:30.235 [2024-10-17 13:14:38.193168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:30.235 [2024-10-17 13:14:38.193208] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 00:06:30.235 [2024-10-17 13:14:38.193224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:30.235 [2024-10-17 13:14:38.193277] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13599927204790975676 len:48317 00:06:30.236 [2024-10-17 13:14:38.193294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:30.236 #34 NEW cov: 12444 ft: 15077 corp: 23/760b lim: 50 exec/s: 34 rss: 74Mb L: 33/49 MS: 1 CrossOver- 00:06:30.236 [2024-10-17 13:14:38.233358] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 00:06:30.236 [2024-10-17 13:14:38.233385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:30.236 
[2024-10-17 13:14:38.233434] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11791448172606497699 len:41892 00:06:30.236 [2024-10-17 13:14:38.233450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:30.236 [2024-10-17 13:14:38.233502] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:702825037824 len:41984 00:06:30.236 [2024-10-17 13:14:38.233518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:30.236 [2024-10-17 13:14:38.233571] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:11791448174150001571 len:41892 00:06:30.236 [2024-10-17 13:14:38.233587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:30.236 #35 NEW cov: 12444 ft: 15090 corp: 24/804b lim: 50 exec/s: 35 rss: 74Mb L: 44/49 MS: 1 CopyPart- 00:06:30.495 [2024-10-17 13:14:38.293379] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 00:06:30.495 [2024-10-17 13:14:38.293406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:30.495 [2024-10-17 13:14:38.293453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11791448172606497699 len:41892 00:06:30.495 [2024-10-17 13:14:38.293470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:30.495 [2024-10-17 13:14:38.293523] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446642521953248163 len:41892 00:06:30.495 [2024-10-17 13:14:38.293543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:30.495 #36 NEW cov: 12444 ft: 15096 corp: 25/843b lim: 50 exec/s: 36 rss: 74Mb L: 39/49 MS: 1 ShuffleBytes- 00:06:30.495 [2024-10-17 13:14:38.333488] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 00:06:30.495 [2024-10-17 13:14:38.333516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:30.495 [2024-10-17 13:14:38.333562] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11791448172606497699 len:41892 00:06:30.495 [2024-10-17 13:14:38.333577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:30.495 [2024-10-17 13:14:38.333631] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446642521953248163 len:41892 00:06:30.495 [2024-10-17 13:14:38.333663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:30.495 #37 NEW cov: 12444 ft: 15170 corp: 26/882b lim: 50 exec/s: 37 rss: 74Mb L: 39/49 MS: 1 ShuffleBytes- 00:06:30.495 [2024-10-17 13:14:38.393796] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 00:06:30.495 [2024-10-17 13:14:38.393824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:30.495 [2024-10-17 13:14:38.393872] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11791448172606497699 len:41892 00:06:30.495 [2024-10-17 13:14:38.393888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:30.495 [2024-10-17 13:14:38.393941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446642521953248163 len:41892 00:06:30.495 [2024-10-17 13:14:38.393956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:30.495 [2024-10-17 13:14:38.394009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:288230378897122048 len:1 00:06:30.495 [2024-10-17 13:14:38.394025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:30.495 #38 NEW cov: 12444 ft: 15180 corp: 27/929b lim: 50 exec/s: 38 rss: 74Mb L: 47/49 MS: 1 CMP- DE: "\000\004\000\000\000\000\000\000"- 00:06:30.495 [2024-10-17 13:14:38.453948] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 00:06:30.495 [2024-10-17 13:14:38.453975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:30.495 [2024-10-17 13:14:38.454027] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11791448172606497699 len:41892 00:06:30.495 [2024-10-17 13:14:38.454044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:30.495 [2024-10-17 13:14:38.454098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11817445025533600419 len:41892 00:06:30.495 [2024-10-17 13:14:38.454114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:30.495 [2024-10-17 13:14:38.454169] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:11791448172606497699 len:41867 00:06:30.495 [2024-10-17 13:14:38.454186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:30.495 #39 NEW cov: 12444 ft: 15241 corp: 28/969b lim: 50 exec/s: 39 rss: 74Mb L: 40/49 MS: 1 InsertByte- 00:06:30.495 [2024-10-17 13:14:38.493911] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13599952494648933564 len:48317 00:06:30.495 [2024-10-17 13:14:38.493937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:30.495 [2024-10-17 13:14:38.493973] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 00:06:30.495 [2024-10-17 
13:14:38.493989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:30.495 [2024-10-17 13:14:38.494043] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13599952493558431742 len:48317 00:06:30.495 [2024-10-17 13:14:38.494060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:30.495 #40 NEW cov: 12444 ft: 15272 corp: 29/1000b lim: 50 exec/s: 40 rss: 74Mb L: 31/49 MS: 1 ChangeByte- 00:06:30.754 [2024-10-17 13:14:38.554282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13571671436379864252 len:17341 00:06:30.754 [2024-10-17 13:14:38.554310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:30.754 [2024-10-17 13:14:38.554358] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 00:06:30.754 [2024-10-17 13:14:38.554374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:30.754 [2024-10-17 13:14:38.554426] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13599926839718755516 len:26472 00:06:30.754 [2024-10-17 13:14:38.554442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:30.754 [2024-10-17 13:14:38.554495] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:7451037802321897319 len:26472 00:06:30.754 [2024-10-17 13:14:38.554510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:30.754 #41 NEW cov: 12444 ft: 15291 corp: 30/1047b lim: 50 exec/s: 41 rss: 74Mb L: 47/49 MS: 1 InsertRepeatedBytes- 00:06:30.754 [2024-10-17 13:14:38.614426] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 00:06:30.754 [2024-10-17 13:14:38.614455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:30.754 [2024-10-17 13:14:38.614503] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11791448172606497699 len:41892 00:06:30.754 [2024-10-17 13:14:38.614519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:30.754 [2024-10-17 13:14:38.614570] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11817445025533600419 len:41892 00:06:30.754 [2024-10-17 13:14:38.614586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:30.754 [2024-10-17 13:14:38.614657] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:11791448172606497699 len:41892 00:06:30.754 [2024-10-17 13:14:38.614673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 
00:06:30.754 #42 NEW cov: 12444 ft: 15321 corp: 31/1094b lim: 50 exec/s: 42 rss: 75Mb L: 47/49 MS: 1 CopyPart- 00:06:30.754 [2024-10-17 13:14:38.674602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 00:06:30.754 [2024-10-17 13:14:38.674634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:30.754 [2024-10-17 13:14:38.674672] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11791448172606497699 len:41892 00:06:30.754 [2024-10-17 13:14:38.674687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:30.754 [2024-10-17 13:14:38.674743] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:702825037824 len:41984 00:06:30.754 [2024-10-17 13:14:38.674759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:30.754 [2024-10-17 13:14:38.674811] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:11791448174150001571 len:41892 00:06:30.754 [2024-10-17 13:14:38.674828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:30.754 #43 NEW cov: 12444 ft: 15331 corp: 32/1140b lim: 50 exec/s: 43 rss: 75Mb L: 46/49 MS: 1 PersAutoDict- DE: "\000\037"- 00:06:30.754 [2024-10-17 13:14:38.734434] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13599952494648933564 len:48317 00:06:30.754 [2024-10-17 13:14:38.734463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:30.754 #44 NEW cov: 12444 ft: 15348 corp: 33/1156b lim: 50 exec/s: 44 rss: 75Mb L: 16/49 MS: 1 PersAutoDict- DE: "\000\037"- 00:06:30.755 [2024-10-17 13:14:38.795002] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13600023962904739004 len:48317 00:06:30.755 [2024-10-17 13:14:38.795030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:30.755 [2024-10-17 13:14:38.795081] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 00:06:30.755 [2024-10-17 13:14:38.795098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:30.755 [2024-10-17 13:14:38.795156] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13599952493558414524 len:1 00:06:30.755 [2024-10-17 13:14:38.795173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:30.755 [2024-10-17 13:14:38.795227] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:13599952490404297916 len:64701 00:06:30.755 [2024-10-17 13:14:38.795243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:31.014 #45 NEW cov: 12444 
ft: 15351 corp: 34/1205b lim: 50 exec/s: 45 rss: 75Mb L: 49/49 MS: 1 ChangeBit- 00:06:31.014 [2024-10-17 13:14:38.855081] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11791448172606497699 len:41892 00:06:31.014 [2024-10-17 13:14:38.855109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:31.014 [2024-10-17 13:14:38.855163] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11791448172606497699 len:41892 00:06:31.014 [2024-10-17 13:14:38.855180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:31.014 [2024-10-17 13:14:38.855231] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11791549721169077026 len:41892 00:06:31.014 [2024-10-17 13:14:38.855247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:31.014 [2024-10-17 13:14:38.855302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:11791448172606497699 len:41892 00:06:31.014 [2024-10-17 13:14:38.855318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:31.014 #46 NEW cov: 12444 ft: 15355 corp: 35/1246b lim: 50 exec/s: 23 rss: 75Mb L: 41/49 MS: 1 InsertByte- 00:06:31.014 #46 DONE cov: 12444 ft: 15355 corp: 35/1246b lim: 50 exec/s: 23 rss: 75Mb 00:06:31.014 ###### Recommended dictionary. ###### 00:06:31.014 "\377\377" # Uses: 2 00:06:31.014 "\000\037" # Uses: 3 00:06:31.014 "\000\004\000\000\000\000\000\000" # Uses: 0 00:06:31.014 ###### End of recommended dictionary. 
######
00:06:31.014 Done 46 runs in 2 second(s)
00:06:31.014 13:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz
00:06:31.014 13:14:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:31.014 13:14:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:31.014 13:14:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1
00:06:31.014 13:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20
00:06:31.014 13:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:31.014 13:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:31.014 13:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20
00:06:31.014 13:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf
00:06:31.014 13:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:31.014 13:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:31.014 13:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20
00:06:31.014 13:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420
00:06:31.014 13:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20
00:06:31.014 13:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420'
00:06:31.014 13:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:31.014 13:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:31.014 13:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:31.014 13:14:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20
[2024-10-17 13:14:39.024781] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization...
00:06:31.014 [2024-10-17 13:14:39.024852] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3848685 ] 00:06:31.273 [2024-10-17 13:14:39.202912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.273 [2024-10-17 13:14:39.236143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.273 [2024-10-17 13:14:39.294880] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.273 [2024-10-17 13:14:39.311252] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:31.273 INFO: Running with entropic power schedule (0xFF, 100). 00:06:31.273 INFO: Seed: 1893601968 00:06:31.533 INFO: Loaded 1 modules (384632 inline 8-bit counters): 384632 [0x2bf274c, 0x2c505c4), 00:06:31.533 INFO: Loaded 1 PC tables (384632 PCs): 384632 [0x2c505c8,0x322ed48), 00:06:31.533 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:06:31.533 INFO: A corpus is not provided, starting from an empty corpus 00:06:31.533 #2 INITED exec/s: 0 rss: 66Mb 00:06:31.533 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:31.533 This may also happen if the target rejected all inputs we tried so far 00:06:31.533 [2024-10-17 13:14:39.358777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:31.533 [2024-10-17 13:14:39.358809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:31.792 NEW_FUNC[1/716]: 0x45def8 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:06:31.792 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:31.792 #16 NEW cov: 12275 ft: 12273 corp: 2/36b lim: 90 exec/s: 0 rss: 73Mb L: 35/35 MS: 4 CopyPart-CopyPart-CMP-InsertRepeatedBytes- DE: "\001r\373LNQ\360\214"- 00:06:31.792 [2024-10-17 13:14:39.679890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:31.792 [2024-10-17 13:14:39.679923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:31.792 [2024-10-17 13:14:39.679977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:31.792 [2024-10-17 13:14:39.679994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:31.792 [2024-10-17 13:14:39.680053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:31.792 [2024-10-17 13:14:39.680070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:31.792 #21 NEW cov: 12388 ft: 13524 corp: 3/107b lim: 90 exec/s: 0 rss: 73Mb L: 71/71 MS: 5 InsertByte-InsertByte-ChangeBit-InsertByte-InsertRepeatedBytes- 00:06:31.792 [2024-10-17 13:14:39.719935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 
nsid:0 00:06:31.792 [2024-10-17 13:14:39.719962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:31.792 [2024-10-17 13:14:39.720003] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:31.792 [2024-10-17 13:14:39.720019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:31.792 [2024-10-17 13:14:39.720077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:31.792 [2024-10-17 13:14:39.720094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:31.792 #22 NEW cov: 12394 ft: 13739 corp: 4/178b lim: 90 exec/s: 0 rss: 73Mb L: 71/71 MS: 1 ChangeByte- 00:06:31.792 [2024-10-17 13:14:39.780076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:31.792 [2024-10-17 13:14:39.780103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:31.792 [2024-10-17 13:14:39.780139] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:31.792 [2024-10-17 13:14:39.780162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:31.792 [2024-10-17 13:14:39.780221] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:31.792 [2024-10-17 13:14:39.780239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:31.792 #27 NEW cov: 12479 ft: 14074 corp: 5/240b lim: 90 exec/s: 0 rss: 73Mb L: 62/71 MS: 5 ShuffleBytes-ChangeByte-CrossOver-ChangeBit-CrossOver- 00:06:31.792 [2024-10-17 13:14:39.820059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:31.792 [2024-10-17 13:14:39.820089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:31.792 [2024-10-17 13:14:39.820146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:31.792 [2024-10-17 13:14:39.820167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:32.052 #28 NEW cov: 12479 ft: 14631 corp: 6/279b lim: 90 exec/s: 0 rss: 73Mb L: 39/71 MS: 1 CrossOver- 00:06:32.052 [2024-10-17 13:14:39.880583] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:32.052 [2024-10-17 13:14:39.880612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.052 [2024-10-17 13:14:39.880662] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:32.052 [2024-10-17 13:14:39.880678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:32.052 [2024-10-17 13:14:39.880734] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:32.052 [2024-10-17 13:14:39.880751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:32.052 [2024-10-17 13:14:39.880810] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:32.052 [2024-10-17 13:14:39.880827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:32.052 #29 NEW cov: 12479 ft: 15121 corp: 7/351b lim: 90 exec/s: 0 rss: 73Mb L: 72/72 MS: 1 InsertByte- 00:06:32.052 [2024-10-17 13:14:39.940240] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:32.052 [2024-10-17 13:14:39.940267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.052 #30 NEW cov: 12479 ft: 15223 corp: 8/386b lim: 90 exec/s: 0 rss: 74Mb L: 35/72 MS: 1 PersAutoDict- DE: "\001r\373LNQ\360\214"- 00:06:32.052 [2024-10-17 13:14:39.980816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:32.052 [2024-10-17 13:14:39.980844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.052 [2024-10-17 13:14:39.980896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:32.052 [2024-10-17 13:14:39.980912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:32.052 [2024-10-17 13:14:39.980968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:32.052 [2024-10-17 13:14:39.980983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:32.052 [2024-10-17 13:14:39.981041] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:32.052 [2024-10-17 13:14:39.981058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:32.052 #31 NEW cov: 12479 ft: 15265 corp: 9/458b lim: 90 exec/s: 0 rss: 74Mb L: 72/72 MS: 1 InsertByte- 00:06:32.052 [2024-10-17 13:14:40.022114] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:32.052 [2024-10-17 13:14:40.022145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.052 [2024-10-17 13:14:40.022194] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:32.052 [2024-10-17 13:14:40.022211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:32.052 [2024-10-17 13:14:40.022272] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:32.052 [2024-10-17 13:14:40.022290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:32.052 [2024-10-17 13:14:40.022360] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:32.052 [2024-10-17 13:14:40.022378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:32.052 #32 NEW cov: 12479 ft: 15331 corp: 10/530b lim: 90 exec/s: 0 rss: 74Mb L: 72/72 MS: 1 ChangeBit- 00:06:32.052 [2024-10-17 13:14:40.080661] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:32.052 [2024-10-17 13:14:40.080691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.311 #33 NEW cov: 12479 ft: 15442 corp: 11/565b lim: 90 exec/s: 0 rss: 74Mb L: 35/72 MS: 1 ChangeBinInt- 00:06:32.311 [2024-10-17 13:14:40.140999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:32.311 [2024-10-17 13:14:40.141030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.311 [2024-10-17 13:14:40.141082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:32.311 [2024-10-17 13:14:40.141100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:32.311 #34 NEW cov: 12479 ft: 15559 corp: 12/604b lim: 90 exec/s: 0 rss: 74Mb L: 39/72 MS: 1 EraseBytes- 00:06:32.311 [2024-10-17 13:14:40.181037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:32.311 [2024-10-17 13:14:40.181066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.311 [2024-10-17 13:14:40.181104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:32.311 [2024-10-17 13:14:40.181119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:32.311 #35 NEW cov: 12479 ft: 15621 corp: 13/643b lim: 90 exec/s: 0 rss: 74Mb L: 39/72 MS: 1 PersAutoDict- DE: "\001r\373LNQ\360\214"- 00:06:32.311 [2024-10-17 13:14:40.241253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:32.311 [2024-10-17 13:14:40.241282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.311 [2024-10-17 13:14:40.241321] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:32.311 [2024-10-17 13:14:40.241338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:32.311 NEW_FUNC[1/1]: 0x1bff788 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:32.311 #36 NEW cov: 12502 ft: 15685 corp: 14/682b lim: 90 exec/s: 0 rss: 74Mb L: 39/72 MS: 1 ChangeBit- 00:06:32.311 [2024-10-17 13:14:40.301585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:32.311 [2024-10-17 13:14:40.301613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:06:32.311 [2024-10-17 13:14:40.301662] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:32.311 [2024-10-17 13:14:40.301679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:32.311 [2024-10-17 13:14:40.301737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:32.311 [2024-10-17 13:14:40.301754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:32.311 #37 NEW cov: 12502 ft: 15732 corp: 15/744b lim: 90 exec/s: 37 rss: 74Mb L: 62/72 MS: 1 CopyPart- 00:06:32.311 [2024-10-17 13:14:40.361610] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:32.311 [2024-10-17 13:14:40.361640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.311 [2024-10-17 13:14:40.361700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:32.311 [2024-10-17 13:14:40.361721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:32.570 #38 NEW cov: 12502 ft: 15762 corp: 16/783b lim: 90 exec/s: 38 rss: 74Mb L: 39/72 MS: 1 CrossOver- 00:06:32.570 [2024-10-17 13:14:40.401712] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:32.570 [2024-10-17 13:14:40.401742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.570 [2024-10-17 13:14:40.401781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:32.570 [2024-10-17 13:14:40.401798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:32.570 #39 NEW cov: 12502 ft: 15768 corp: 17/826b lim: 90 exec/s: 39 rss: 74Mb L: 43/72 MS: 1 PersAutoDict- DE: "\001r\373LNQ\360\214"- 00:06:32.570 [2024-10-17 13:14:40.461851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:32.570 [2024-10-17 13:14:40.461880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.570 [2024-10-17 13:14:40.461917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:32.570 [2024-10-17 13:14:40.461934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:32.570 #40 NEW cov: 12502 ft: 15840 corp: 18/866b lim: 90 exec/s: 40 rss: 74Mb L: 40/72 MS: 1 InsertByte- 00:06:32.570 [2024-10-17 13:14:40.522018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:32.570 [2024-10-17 13:14:40.522047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.570 [2024-10-17 13:14:40.522106] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:32.570 [2024-10-17 13:14:40.522123] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:32.570 #41 NEW cov: 12502 ft: 15866 corp: 19/909b lim: 90 exec/s: 41 rss: 74Mb L: 43/72 MS: 1 CMP- DE: "7\202\015\373L\373r\000"- 00:06:32.570 [2024-10-17 13:14:40.562411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:32.570 [2024-10-17 13:14:40.562439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.570 [2024-10-17 13:14:40.562489] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:32.570 [2024-10-17 13:14:40.562504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:32.570 [2024-10-17 13:14:40.562560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:32.570 [2024-10-17 13:14:40.562580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:32.570 [2024-10-17 13:14:40.562637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:32.570 [2024-10-17 13:14:40.562654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:32.570 #42 NEW cov: 12502 ft: 15891 corp: 20/981b lim: 90 exec/s: 42 rss: 74Mb L: 72/72 MS: 1 ChangeBinInt- 00:06:32.829 [2024-10-17 13:14:40.622444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:32.829 [2024-10-17 13:14:40.622472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.829 [2024-10-17 13:14:40.622518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:32.829 [2024-10-17 13:14:40.622536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:32.829 [2024-10-17 13:14:40.622593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:32.829 [2024-10-17 13:14:40.622610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:32.829 #43 NEW cov: 12502 ft: 15900 corp: 21/1049b lim: 90 exec/s: 43 rss: 75Mb L: 68/72 MS: 1 InsertRepeatedBytes- 00:06:32.829 [2024-10-17 13:14:40.662715] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:32.829 [2024-10-17 13:14:40.662745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.829 [2024-10-17 13:14:40.662796] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:32.829 [2024-10-17 13:14:40.662814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:32.829 [2024-10-17 13:14:40.662869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:32.829 
[2024-10-17 13:14:40.662887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:32.829 [2024-10-17 13:14:40.662943] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:32.829 [2024-10-17 13:14:40.662960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:32.829 #44 NEW cov: 12502 ft: 15930 corp: 22/1126b lim: 90 exec/s: 44 rss: 75Mb L: 77/77 MS: 1 CopyPart- 00:06:32.829 [2024-10-17 13:14:40.702651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:32.829 [2024-10-17 13:14:40.702679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.829 [2024-10-17 13:14:40.702728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:32.829 [2024-10-17 13:14:40.702745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:32.829 [2024-10-17 13:14:40.702804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:32.829 [2024-10-17 13:14:40.702821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:32.829 #45 NEW cov: 12502 ft: 15949 corp: 23/1188b lim: 90 exec/s: 45 rss: 75Mb L: 62/77 MS: 1 CopyPart- 00:06:32.829 [2024-10-17 13:14:40.742595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:32.829 [2024-10-17 13:14:40.742623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.829 [2024-10-17 13:14:40.742675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:32.829 [2024-10-17 13:14:40.742693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:32.829 #46 NEW cov: 12502 ft: 15986 corp: 24/1228b lim: 90 exec/s: 46 rss: 75Mb L: 40/77 MS: 1 InsertByte- 00:06:32.829 [2024-10-17 13:14:40.782700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:32.829 [2024-10-17 13:14:40.782728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.829 [2024-10-17 13:14:40.782767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:32.829 [2024-10-17 13:14:40.782783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:32.829 #47 NEW cov: 12502 ft: 15998 corp: 25/1281b lim: 90 exec/s: 47 rss: 75Mb L: 53/77 MS: 1 InsertRepeatedBytes- 00:06:32.829 [2024-10-17 13:14:40.823168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:32.829 [2024-10-17 13:14:40.823194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:32.829 [2024-10-17 13:14:40.823254] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:32.829 [2024-10-17 13:14:40.823269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:32.829 [2024-10-17 13:14:40.823326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:32.829 [2024-10-17 13:14:40.823343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:32.829 [2024-10-17 13:14:40.823400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:32.829 [2024-10-17 13:14:40.823417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:32.829 #48 NEW cov: 12502 ft: 16005 corp: 26/1358b lim: 90 exec/s: 48 rss: 75Mb L: 77/77 MS: 1 CopyPart- 00:06:32.829 [2024-10-17 13:14:40.862802] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:32.829 [2024-10-17 13:14:40.862831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.088 #49 NEW cov: 12502 ft: 16029 corp: 27/1393b lim: 90 exec/s: 49 rss: 75Mb L: 35/77 MS: 1 ShuffleBytes- 00:06:33.088 [2024-10-17 13:14:40.902940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:33.088 [2024-10-17 13:14:40.902967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.088 #50 NEW cov: 12502 ft: 16047 corp: 28/1425b lim: 90 exec/s: 50 rss: 75Mb L: 32/77 MS: 1 EraseBytes- 00:06:33.088 [2024-10-17 13:14:40.963174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:33.088 [2024-10-17 13:14:40.963202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.088 #51 NEW cov: 12502 ft: 16056 corp: 29/1460b lim: 90 exec/s: 51 rss: 75Mb L: 35/77 MS: 1 ChangeBinInt- 00:06:33.088 [2024-10-17 13:14:41.003708] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:33.088 [2024-10-17 13:14:41.003736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.088 [2024-10-17 13:14:41.003785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:33.088 [2024-10-17 13:14:41.003802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:33.088 [2024-10-17 13:14:41.003858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:33.088 [2024-10-17 13:14:41.003875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:33.088 [2024-10-17 13:14:41.003931] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:33.088 [2024-10-17 13:14:41.003947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:33.088 #52 NEW cov: 12502 ft: 16058 corp: 30/1537b lim: 90 exec/s: 52 rss: 75Mb L: 77/77 MS: 1 CMP- DE: "\000r\373MC`\031\234"- 00:06:33.088 [2024-10-17 13:14:41.063398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:33.088 [2024-10-17 13:14:41.063426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.088 #53 NEW cov: 12502 ft: 16105 corp: 31/1572b lim: 90 exec/s: 53 rss: 75Mb L: 35/77 MS: 1 ChangeByte- 00:06:33.088 [2024-10-17 13:14:41.103508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:33.088 [2024-10-17 13:14:41.103535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.348 #54 NEW cov: 12502 ft: 16108 corp: 32/1607b lim: 90 exec/s: 54 rss: 75Mb L: 35/77 MS: 1 ChangeBinInt- 00:06:33.348 [2024-10-17 13:14:41.163890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:33.348 [2024-10-17 13:14:41.163917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.348 [2024-10-17 13:14:41.163957] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:33.348 [2024-10-17 13:14:41.163974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:33.348 #55 NEW cov: 12502 ft: 16126 corp: 33/1650b lim: 90 exec/s: 55 rss: 75Mb L: 43/77 MS: 1 PersAutoDict- DE: "7\202\015\373L\373r\000"- 00:06:33.348 [2024-10-17 13:14:41.224549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:33.348 [2024-10-17 13:14:41.224578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.348 [2024-10-17 13:14:41.224638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:33.348 [2024-10-17 13:14:41.224655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:33.348 [2024-10-17 13:14:41.224711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:33.348 [2024-10-17 13:14:41.224727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:33.348 [2024-10-17 13:14:41.224782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:33.348 [2024-10-17 13:14:41.224799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:33.348 [2024-10-17 13:14:41.224855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:06:33.348 [2024-10-17 13:14:41.224872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:33.348 #56 NEW cov: 12502 ft: 16172 corp: 34/1740b lim: 90 exec/s: 56 rss: 75Mb 
L: 90/90 MS: 1 InsertRepeatedBytes- 00:06:33.348 [2024-10-17 13:14:41.264155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:33.348 [2024-10-17 13:14:41.264183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.348 [2024-10-17 13:14:41.264242] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:33.348 [2024-10-17 13:14:41.264259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:33.348 #57 NEW cov: 12502 ft: 16210 corp: 35/1780b lim: 90 exec/s: 57 rss: 75Mb L: 40/90 MS: 1 ChangeBit- 00:06:33.348 [2024-10-17 13:14:41.324606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:33.348 [2024-10-17 13:14:41.324634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.348 [2024-10-17 13:14:41.324687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:33.348 [2024-10-17 13:14:41.324703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:33.348 [2024-10-17 13:14:41.324762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:33.348 [2024-10-17 13:14:41.324795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:33.348 [2024-10-17 13:14:41.324854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:33.348 [2024-10-17 13:14:41.324872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:33.348 #58 NEW cov: 12502 ft: 16211 corp: 36/1856b lim: 90 exec/s: 29 rss: 75Mb L: 76/90 MS: 1 InsertRepeatedBytes- 00:06:33.348 #58 DONE cov: 12502 ft: 16211 corp: 36/1856b lim: 90 exec/s: 29 rss: 75Mb 00:06:33.348 ###### Recommended dictionary. ###### 00:06:33.348 "\001r\373LNQ\360\214" # Uses: 3 00:06:33.348 "7\202\015\373L\373r\000" # Uses: 1 00:06:33.348 "\000r\373MC`\031\234" # Uses: 0 00:06:33.348 ###### End of recommended dictionary. 
######
00:06:33.348 Done 58 runs in 2 second(s)
00:06:33.608 13:14:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz
00:06:33.608 13:14:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:33.608 13:14:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:33.608 13:14:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1
00:06:33.608 13:14:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21
00:06:33.608 13:14:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:33.608 13:14:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:33.608 13:14:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21
00:06:33.608 13:14:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf
00:06:33.608 13:14:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:33.608 13:14:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:33.608 13:14:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21
00:06:33.608 13:14:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421
00:06:33.608 13:14:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21
00:06:33.608 13:14:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421'
00:06:33.608 13:14:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:33.608 13:14:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:33.608 13:14:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:33.608 13:14:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21
[2024-10-17 13:14:41.497530] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization...
00:06:33.608 [2024-10-17 13:14:41.497620] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3849038 ] 00:06:33.866 [2024-10-17 13:14:41.671976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.866 [2024-10-17 13:14:41.706129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.866 [2024-10-17 13:14:41.765024] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.866 [2024-10-17 13:14:41.781393] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:06:33.866 INFO: Running with entropic power schedule (0xFF, 100). 00:06:33.866 INFO: Seed: 68617205 00:06:33.866 INFO: Loaded 1 modules (384632 inline 8-bit counters): 384632 [0x2bf274c, 0x2c505c4), 00:06:33.866 INFO: Loaded 1 PC tables (384632 PCs): 384632 [0x2c505c8,0x322ed48), 00:06:33.866 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:06:33.866 INFO: A corpus is not provided, starting from an empty corpus 00:06:33.866 #2 INITED exec/s: 0 rss: 65Mb 00:06:33.866 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:33.866 This may also happen if the target rejected all inputs we tried so far 00:06:33.866 [2024-10-17 13:14:41.850917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:33.866 [2024-10-17 13:14:41.850960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:33.866 [2024-10-17 13:14:41.851103] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:33.866 [2024-10-17 13:14:41.851123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:34.125 NEW_FUNC[1/716]: 0x461128 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:06:34.125 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:34.125 #6 NEW cov: 12248 ft: 12243 corp: 2/21b lim: 50 exec/s: 0 rss: 73Mb L: 20/20 MS: 4 CrossOver-ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:06:34.383 [2024-10-17 13:14:42.201301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:34.383 [2024-10-17 13:14:42.201345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:34.383 #25 NEW cov: 12363 ft: 13541 corp: 3/36b lim: 50 exec/s: 0 rss: 73Mb L: 15/20 MS: 4 InsertByte-InsertByte-ChangeBit-InsertRepeatedBytes- 00:06:34.383 [2024-10-17 13:14:42.251432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:34.383 [2024-10-17 13:14:42.251459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:34.383 #26 NEW cov: 12369 ft: 13726 corp: 4/51b lim: 50 exec/s: 0 rss: 73Mb L: 15/20 MS: 1 ChangeASCIIInt- 00:06:34.383 [2024-10-17 13:14:42.321628] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:34.383 [2024-10-17 13:14:42.321660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:34.383 #27 NEW cov: 12454 ft: 14058 corp: 5/67b lim: 50 exec/s: 0 rss: 73Mb L: 16/20 MS: 1 InsertByte- 00:06:34.383 [2024-10-17 13:14:42.372095] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:34.383 [2024-10-17 13:14:42.372131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:34.383 [2024-10-17 13:14:42.372250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:34.383 [2024-10-17 13:14:42.372273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:34.383 #28 NEW cov: 12454 ft: 14206 corp: 6/87b lim: 50 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 ShuffleBytes- 00:06:34.642 [2024-10-17 13:14:42.442842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:34.642 [2024-10-17 13:14:42.442880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:34.642 [2024-10-17 13:14:42.442981] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:34.642 [2024-10-17 13:14:42.443004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:34.642 [2024-10-17 13:14:42.443128] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:34.642 [2024-10-17 13:14:42.443159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:34.642 [2024-10-17 13:14:42.443284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:34.642 [2024-10-17 13:14:42.443306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:34.642 #29 NEW cov: 12454 ft: 14614 corp: 7/127b lim: 50 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 CrossOver- 00:06:34.642 [2024-10-17 13:14:42.512373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:34.642 [2024-10-17 13:14:42.512401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:34.642 #30 NEW cov: 12454 ft: 14679 corp: 8/142b lim: 50 exec/s: 0 rss: 73Mb L: 15/40 MS: 1 ChangeBit- 00:06:34.642 [2024-10-17 13:14:42.582957] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:34.642 [2024-10-17 13:14:42.582992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:34.642 [2024-10-17 13:14:42.583117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:34.642 [2024-10-17 13:14:42.583138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:34.642 [2024-10-17 13:14:42.583169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:34.642 [2024-10-17 13:14:42.583193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:34.642 #31 NEW cov: 12454 ft: 14931 corp: 9/174b lim: 50 exec/s: 0 rss: 73Mb L: 32/40 MS: 1 InsertRepeatedBytes- 00:06:34.642 [2024-10-17 13:14:42.633327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:34.642 [2024-10-17 13:14:42.633368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:34.642 [2024-10-17 13:14:42.633475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:34.642 [2024-10-17 13:14:42.633501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:34.642 [2024-10-17 13:14:42.633620] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:34.642 [2024-10-17 13:14:42.633643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:34.642 [2024-10-17 13:14:42.633761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:34.642 [2024-10-17 13:14:42.633782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:34.642 #32 NEW cov: 12454 ft: 14980 corp: 10/214b lim: 50 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 ChangeBinInt- 00:06:34.902 [2024-10-17 13:14:42.703583] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:34.902 [2024-10-17 13:14:42.703617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:34.902 [2024-10-17 13:14:42.703704] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:34.902 [2024-10-17 13:14:42.703721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:34.902 [2024-10-17 13:14:42.703838] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:34.902 [2024-10-17 13:14:42.703863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:34.902 [2024-10-17 13:14:42.703985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:34.902 [2024-10-17 13:14:42.704010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:34.902 NEW_FUNC[1/1]: 0x1bff788 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:34.902 #33 NEW cov: 12477 ft: 15062 corp: 11/254b lim: 50 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:06:34.902 [2024-10-17 13:14:42.773272] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) 
sqid:1 cid:0 nsid:0 00:06:34.902 [2024-10-17 13:14:42.773319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:34.902 [2024-10-17 13:14:42.773441] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:34.902 [2024-10-17 13:14:42.773465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:34.902 #34 NEW cov: 12477 ft: 15111 corp: 12/274b lim: 50 exec/s: 0 rss: 74Mb L: 20/40 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:06:34.902 [2024-10-17 13:14:42.823140] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:34.902 [2024-10-17 13:14:42.823171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:34.902 #37 NEW cov: 12477 ft: 15179 corp: 13/289b lim: 50 exec/s: 37 rss: 74Mb L: 15/40 MS: 3 ChangeBit-ChangeByte-CrossOver- 00:06:34.902 [2024-10-17 13:14:42.863535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:34.902 [2024-10-17 13:14:42.863570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:34.902 [2024-10-17 13:14:42.863700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:34.902 [2024-10-17 13:14:42.863723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:34.902 #38 NEW cov: 12477 ft: 15274 corp: 14/309b lim: 50 exec/s: 38 rss: 74Mb L: 20/40 MS: 1 CrossOver- 00:06:34.902 [2024-10-17 13:14:42.913449] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:34.902 [2024-10-17 13:14:42.913476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.161 #39 NEW cov: 12477 ft: 15296 corp: 15/325b lim: 50 exec/s: 39 rss: 74Mb L: 16/40 MS: 1 InsertByte- 00:06:35.161 [2024-10-17 13:14:42.983914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:35.161 [2024-10-17 13:14:42.983947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.161 [2024-10-17 13:14:42.984076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:35.161 [2024-10-17 13:14:42.984100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.161 #40 NEW cov: 12477 ft: 15320 corp: 16/345b lim: 50 exec/s: 40 rss: 74Mb L: 20/40 MS: 1 CMP- DE: "\377\377\377\365"- 00:06:35.161 [2024-10-17 13:14:43.054195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:35.161 [2024-10-17 13:14:43.054227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.161 [2024-10-17 13:14:43.054319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 
00:06:35.161 [2024-10-17 13:14:43.054353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.161 #41 NEW cov: 12477 ft: 15325 corp: 17/365b lim: 50 exec/s: 41 rss: 74Mb L: 20/40 MS: 1 ShuffleBytes- 00:06:35.161 [2024-10-17 13:14:43.104726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:35.161 [2024-10-17 13:14:43.104762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.161 [2024-10-17 13:14:43.104879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:35.161 [2024-10-17 13:14:43.104905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.161 [2024-10-17 13:14:43.105023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:35.161 [2024-10-17 13:14:43.105048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:35.161 [2024-10-17 13:14:43.105176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:35.161 [2024-10-17 13:14:43.105198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:35.161 #42 NEW cov: 12477 ft: 15471 corp: 18/405b lim: 50 exec/s: 42 rss: 74Mb L: 40/40 MS: 1 ChangeBit- 00:06:35.161 [2024-10-17 13:14:43.174178] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:35.161 [2024-10-17 13:14:43.174205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.161 #43 NEW cov: 12477 ft: 15478 corp: 19/421b lim: 50 exec/s: 43 rss: 74Mb L: 16/40 MS: 1 InsertByte- 00:06:35.420 [2024-10-17 13:14:43.224549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:35.420 [2024-10-17 13:14:43.224586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.420 [2024-10-17 13:14:43.224701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:35.420 [2024-10-17 13:14:43.224725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.420 #44 NEW cov: 12477 ft: 15501 corp: 20/441b lim: 50 exec/s: 44 rss: 74Mb L: 20/40 MS: 1 ChangeBinInt- 00:06:35.420 [2024-10-17 13:14:43.274430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:35.421 [2024-10-17 13:14:43.274458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.421 #45 NEW cov: 12477 ft: 15548 corp: 21/457b lim: 50 exec/s: 45 rss: 74Mb L: 16/40 MS: 1 PersAutoDict- DE: "\377\377\377\365"- 00:06:35.421 [2024-10-17 13:14:43.344972] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:35.421 [2024-10-17 13:14:43.345000] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.421 [2024-10-17 13:14:43.345133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:35.421 [2024-10-17 13:14:43.345158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.421 #46 NEW cov: 12477 ft: 15568 corp: 22/477b lim: 50 exec/s: 46 rss: 74Mb L: 20/40 MS: 1 ChangeBit- 00:06:35.421 [2024-10-17 13:14:43.394928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:35.421 [2024-10-17 13:14:43.394965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.421 #47 NEW cov: 12477 ft: 15583 corp: 23/494b lim: 50 exec/s: 47 rss: 74Mb L: 17/40 MS: 1 InsertByte- 00:06:35.421 [2024-10-17 13:14:43.465005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:35.421 [2024-10-17 13:14:43.465036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.680 #48 NEW cov: 12477 ft: 15614 corp: 24/510b lim: 50 exec/s: 48 rss: 74Mb L: 16/40 MS: 1 ShuffleBytes- 00:06:35.680 [2024-10-17 13:14:43.515531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:35.680 [2024-10-17 13:14:43.515558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.680 [2024-10-17 13:14:43.515695] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:35.680 [2024-10-17 13:14:43.515720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.680 #49 NEW cov: 12477 ft: 15633 corp: 25/531b lim: 50 exec/s: 49 rss: 74Mb L: 21/40 MS: 1 InsertByte- 00:06:35.680 [2024-10-17 13:14:43.585742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:35.680 [2024-10-17 13:14:43.585777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.680 [2024-10-17 13:14:43.585895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:35.680 [2024-10-17 13:14:43.585920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.680 #55 NEW cov: 12477 ft: 15658 corp: 26/551b lim: 50 exec/s: 55 rss: 74Mb L: 20/40 MS: 1 ChangeByte- 00:06:35.680 [2024-10-17 13:14:43.635638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:35.680 [2024-10-17 13:14:43.635666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.680 #56 NEW cov: 12477 ft: 15734 corp: 27/567b lim: 50 exec/s: 56 rss: 74Mb L: 16/40 MS: 1 ChangeBit- 00:06:35.680 [2024-10-17 13:14:43.686123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:35.680 [2024-10-17 13:14:43.686159] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.680 [2024-10-17 13:14:43.686274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:35.680 [2024-10-17 13:14:43.686299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.680 #57 NEW cov: 12477 ft: 15782 corp: 28/595b lim: 50 exec/s: 57 rss: 74Mb L: 28/40 MS: 1 EraseBytes- 00:06:35.939 [2024-10-17 13:14:43.735962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:35.939 [2024-10-17 13:14:43.735998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.939 #58 NEW cov: 12477 ft: 15803 corp: 29/611b lim: 50 exec/s: 58 rss: 75Mb L: 16/40 MS: 1 ChangeBinInt- 00:06:35.939 [2024-10-17 13:14:43.806417] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:35.939 [2024-10-17 13:14:43.806455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:35.939 [2024-10-17 13:14:43.806581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:35.939 [2024-10-17 13:14:43.806608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:35.939 #59 NEW cov: 12477 ft: 15804 corp: 30/631b lim: 50 exec/s: 29 rss: 75Mb L: 20/40 MS: 1 ShuffleBytes- 00:06:35.939 #59 DONE cov: 12477 ft: 15804 corp: 30/631b lim: 50 exec/s: 29 rss: 75Mb 00:06:35.939 ###### Recommended dictionary. ###### 00:06:35.939 "\000\000\000\000\000\000\000\000" # Uses: 1 00:06:35.939 "\377\377\377\365" # Uses: 1 00:06:35.939 ###### End of recommended dictionary. 
###### 00:06:35.939 Done 59 runs in 2 second(s) 00:06:35.939 13:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:06:35.939 13:14:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:35.939 13:14:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:35.939 13:14:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:06:35.939 13:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:06:35.939 13:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:35.939 13:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:35.939 13:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:06:35.939 13:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:06:35.939 13:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:35.939 13:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:35.939 13:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:06:35.939 13:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:06:35.939 13:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:06:35.940 13:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:06:35.940 13:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:35.940 13:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:35.940 13:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:35.940 13:14:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:06:36.199 [2024-10-17 13:14:43.996777] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
00:06:36.199 [2024-10-17 13:14:43.996841] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3849572 ] 00:06:36.199 [2024-10-17 13:14:44.170279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.199 [2024-10-17 13:14:44.203218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.457 [2024-10-17 13:14:44.261902] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:36.457 [2024-10-17 13:14:44.278255] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:06:36.457 INFO: Running with entropic power schedule (0xFF, 100). 00:06:36.457 INFO: Seed: 2567616915 00:06:36.457 INFO: Loaded 1 modules (384632 inline 8-bit counters): 384632 [0x2bf274c, 0x2c505c4), 00:06:36.457 INFO: Loaded 1 PC tables (384632 PCs): 384632 [0x2c505c8,0x322ed48), 00:06:36.457 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:06:36.457 INFO: A corpus is not provided, starting from an empty corpus 00:06:36.457 #2 INITED exec/s: 0 rss: 65Mb 00:06:36.457 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:36.457 This may also happen if the target rejected all inputs we tried so far 00:06:36.457 [2024-10-17 13:14:44.354350] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:36.457 [2024-10-17 13:14:44.354385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:36.457 [2024-10-17 13:14:44.354512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:36.457 [2024-10-17 13:14:44.354536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:36.716 NEW_FUNC[1/716]: 0x4633f8 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:06:36.716 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:36.716 #8 NEW cov: 12258 ft: 12258 corp: 2/39b lim: 85 exec/s: 0 rss: 73Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:06:36.716 [2024-10-17 13:14:44.685785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:36.716 [2024-10-17 13:14:44.685834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:36.716 [2024-10-17 13:14:44.685960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:36.716 [2024-10-17 13:14:44.685988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:36.716 [2024-10-17 13:14:44.686122] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:36.716 [2024-10-17 13:14:44.686159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:36.716 #9 NEW cov: 
12388 ft: 13285 corp: 3/91b lim: 85 exec/s: 0 rss: 73Mb L: 52/52 MS: 1 CopyPart- 00:06:36.716 [2024-10-17 13:14:44.756139] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:36.716 [2024-10-17 13:14:44.756181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:36.716 [2024-10-17 13:14:44.756291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:36.716 [2024-10-17 13:14:44.756315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:36.716 [2024-10-17 13:14:44.756438] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:36.716 [2024-10-17 13:14:44.756462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:36.716 [2024-10-17 13:14:44.756586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:36.716 [2024-10-17 13:14:44.756612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:36.976 #14 NEW cov: 12394 ft: 13816 corp: 4/173b lim: 85 exec/s: 0 rss: 73Mb L: 82/82 MS: 5 ChangeByte-ChangeByte-ChangeBit-InsertByte-InsertRepeatedBytes- 00:06:36.976 [2024-10-17 13:14:44.806517] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:36.976 [2024-10-17 13:14:44.806550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:36.976 [2024-10-17 13:14:44.806643] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:36.976 [2024-10-17 13:14:44.806665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:36.976 [2024-10-17 13:14:44.806787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:36.976 [2024-10-17 13:14:44.806815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:36.976 [2024-10-17 13:14:44.806929] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:36.976 [2024-10-17 13:14:44.806956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:36.976 [2024-10-17 13:14:44.807081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:06:36.976 [2024-10-17 13:14:44.807103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:36.976 #15 NEW cov: 12479 ft: 14022 corp: 5/258b lim: 85 exec/s: 0 rss: 73Mb L: 85/85 MS: 1 InsertRepeatedBytes- 00:06:36.976 [2024-10-17 13:14:44.876203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:36.976 [2024-10-17 13:14:44.876241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:36.976 [2024-10-17 13:14:44.876362] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:36.976 [2024-10-17 13:14:44.876391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:36.976 [2024-10-17 13:14:44.876514] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:36.976 [2024-10-17 13:14:44.876539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:36.976 #16 NEW cov: 12479 ft: 14143 corp: 6/310b lim: 85 exec/s: 0 rss: 73Mb L: 52/85 MS: 1 ChangeByte- 00:06:36.976 [2024-10-17 13:14:44.926872] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:36.976 [2024-10-17 13:14:44.926905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:36.976 [2024-10-17 13:14:44.927001] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:36.976 [2024-10-17 13:14:44.927019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:36.976 [2024-10-17 13:14:44.927138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:36.976 [2024-10-17 13:14:44.927168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:36.976 [2024-10-17 13:14:44.927295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:36.976 [2024-10-17 13:14:44.927319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:36.976 [2024-10-17 13:14:44.927445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:06:36.976 [2024-10-17 13:14:44.927471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:36.976 #17 NEW cov: 12479 ft: 14205 corp: 7/395b lim: 85 exec/s: 0 rss: 73Mb L: 85/85 MS: 1 CopyPart- 00:06:36.976 [2024-10-17 13:14:44.997104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:36.976 [2024-10-17 13:14:44.997138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:36.976 [2024-10-17 13:14:44.997225] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:36.976 [2024-10-17 13:14:44.997246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:36.976 [2024-10-17 13:14:44.997370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:36.976 [2024-10-17 13:14:44.997394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:36.976 [2024-10-17 13:14:44.997521] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:36.976 [2024-10-17 13:14:44.997545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:36.976 [2024-10-17 13:14:44.997673] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:06:36.976 [2024-10-17 13:14:44.997696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:36.976 #18 NEW cov: 12479 ft: 14320 corp: 8/480b lim: 85 exec/s: 0 rss: 73Mb L: 85/85 MS: 1 ChangeBit- 00:06:37.236 [2024-10-17 13:14:45.046765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:37.236 [2024-10-17 13:14:45.046803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:37.236 [2024-10-17 13:14:45.046922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:37.236 [2024-10-17 13:14:45.046950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:37.236 [2024-10-17 13:14:45.047080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:37.236 [2024-10-17 13:14:45.047105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:37.236 #19 NEW cov: 12479 ft: 14340 corp: 9/532b lim: 85 exec/s: 0 rss: 73Mb L: 52/85 MS: 1 CopyPart- 00:06:37.236 [2024-10-17 13:14:45.096637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:37.236 [2024-10-17 13:14:45.096664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:37.236 [2024-10-17 13:14:45.096791] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:37.236 [2024-10-17 13:14:45.096816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:37.236 #20 NEW cov: 12479 ft: 14400 corp: 10/566b lim: 85 exec/s: 0 rss: 73Mb L: 34/85 MS: 1 EraseBytes- 00:06:37.236 [2024-10-17 13:14:45.146812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:37.236 [2024-10-17 13:14:45.146843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:37.236 [2024-10-17 13:14:45.146968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:37.236 [2024-10-17 13:14:45.146996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:37.236 #21 NEW cov: 12479 ft: 14444 corp: 11/604b lim: 85 exec/s: 0 rss: 73Mb L: 38/85 MS: 1 ChangeBit- 00:06:37.236 [2024-10-17 13:14:45.197758] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:37.236 [2024-10-17 13:14:45.197793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:37.236 [2024-10-17 13:14:45.197880] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:37.236 [2024-10-17 13:14:45.197904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:37.236 [2024-10-17 13:14:45.198028] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:37.236 [2024-10-17 13:14:45.198052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:37.236 [2024-10-17 13:14:45.198177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:37.236 [2024-10-17 13:14:45.198202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:37.236 [2024-10-17 13:14:45.198326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:06:37.236 [2024-10-17 13:14:45.198351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:37.236 NEW_FUNC[1/1]: 0x1bff788 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:37.236 #22 NEW cov: 12502 ft: 14502 corp: 12/689b lim: 85 exec/s: 0 rss: 74Mb L: 85/85 MS: 1 ChangeBinInt- 00:06:37.236 [2024-10-17 13:14:45.247436] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:37.236 [2024-10-17 13:14:45.247464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:37.236 [2024-10-17 13:14:45.247588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:37.236 [2024-10-17 13:14:45.247612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:37.236 [2024-10-17 13:14:45.247737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:37.236 [2024-10-17 13:14:45.247763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:37.496 #23 NEW cov: 12502 ft: 14546 corp: 13/744b lim: 85 exec/s: 0 rss: 74Mb L: 55/85 MS: 1 InsertRepeatedBytes- 00:06:37.496 [2024-10-17 13:14:45.317333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:37.496 [2024-10-17 13:14:45.317370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:37.496 [2024-10-17 13:14:45.317472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:37.496 [2024-10-17 13:14:45.317495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:37.496 #24 NEW cov: 12502 ft: 14554 corp: 14/790b lim: 85 exec/s: 24 rss: 74Mb L: 46/85 MS: 1 EraseBytes- 00:06:37.496 [2024-10-17 13:14:45.388277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 
cid:0 nsid:0 00:06:37.496 [2024-10-17 13:14:45.388309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:37.496 [2024-10-17 13:14:45.388395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:37.496 [2024-10-17 13:14:45.388418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:37.496 [2024-10-17 13:14:45.388541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:37.496 [2024-10-17 13:14:45.388568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:37.496 [2024-10-17 13:14:45.388698] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:37.497 [2024-10-17 13:14:45.388723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:37.497 [2024-10-17 13:14:45.388844] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:06:37.497 [2024-10-17 13:14:45.388870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:37.497 #25 NEW cov: 12502 ft: 14600 corp: 15/875b lim: 85 exec/s: 25 rss: 74Mb L: 85/85 MS: 1 CopyPart- 00:06:37.497 [2024-10-17 13:14:45.437866] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:37.497 [2024-10-17 13:14:45.437901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:37.497 [2024-10-17 13:14:45.438003] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:37.497 [2024-10-17 13:14:45.438024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:37.497 [2024-10-17 13:14:45.438146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:37.497 [2024-10-17 13:14:45.438176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:37.497 #26 NEW cov: 12502 ft: 14620 corp: 16/930b lim: 85 exec/s: 26 rss: 74Mb L: 55/85 MS: 1 ChangeBit- 00:06:37.497 [2024-10-17 13:14:45.498273] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:37.497 [2024-10-17 13:14:45.498307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:37.497 [2024-10-17 13:14:45.498444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:37.497 [2024-10-17 13:14:45.498465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:37.497 [2024-10-17 13:14:45.498590] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:37.497 [2024-10-17 13:14:45.498616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:37.497 #27 NEW cov: 12502 ft: 14696 corp: 17/996b lim: 85 exec/s: 27 rss: 74Mb L: 66/85 MS: 1 CrossOver- 00:06:37.756 [2024-10-17 13:14:45.568316] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:37.756 [2024-10-17 13:14:45.568351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:37.756 [2024-10-17 13:14:45.568474] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:37.756 [2024-10-17 13:14:45.568501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:37.756 [2024-10-17 13:14:45.568620] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:37.757 [2024-10-17 13:14:45.568643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:37.757 #28 NEW cov: 12502 ft: 14731 corp: 18/1048b lim: 85 exec/s: 28 rss: 74Mb L: 52/85 MS: 1 ShuffleBytes- 00:06:37.757 [2024-10-17 13:14:45.618517] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:37.757 [2024-10-17 13:14:45.618545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:37.757 [2024-10-17 13:14:45.618666] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:37.757 [2024-10-17 13:14:45.618689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:37.757 [2024-10-17 13:14:45.618808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:37.757 [2024-10-17 13:14:45.618832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:37.757 #29 NEW cov: 12502 ft: 14772 corp: 19/1100b lim: 85 exec/s: 29 rss: 74Mb L: 52/85 MS: 1 ChangeBinInt- 00:06:37.757 [2024-10-17 13:14:45.668373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:37.757 [2024-10-17 13:14:45.668405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:37.757 [2024-10-17 13:14:45.668524] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:37.757 [2024-10-17 13:14:45.668549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:37.757 #30 NEW cov: 12502 ft: 14794 corp: 20/1138b lim: 85 exec/s: 30 rss: 74Mb L: 38/85 MS: 1 ChangeBinInt- 00:06:37.757 [2024-10-17 13:14:45.708805] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:37.757 [2024-10-17 13:14:45.708840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:37.757 [2024-10-17 13:14:45.708971] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER 
(0d) sqid:1 cid:1 nsid:0 00:06:37.757 [2024-10-17 13:14:45.708996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:37.757 [2024-10-17 13:14:45.709121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:37.757 [2024-10-17 13:14:45.709146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:37.757 #31 NEW cov: 12502 ft: 14797 corp: 21/1205b lim: 85 exec/s: 31 rss: 74Mb L: 67/85 MS: 1 CopyPart- 00:06:37.757 [2024-10-17 13:14:45.778677] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:37.757 [2024-10-17 13:14:45.778706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:37.757 [2024-10-17 13:14:45.778841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:37.757 [2024-10-17 13:14:45.778867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:37.757 #32 NEW cov: 12502 ft: 14868 corp: 22/1239b lim: 85 exec/s: 32 rss: 74Mb L: 34/85 MS: 1 ShuffleBytes- 00:06:38.016 [2024-10-17 13:14:45.829131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:38.016 [2024-10-17 13:14:45.829173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.016 [2024-10-17 13:14:45.829291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:38.016 [2024-10-17 13:14:45.829314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.016 [2024-10-17 13:14:45.829442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:38.016 [2024-10-17 13:14:45.829465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:38.016 #33 NEW cov: 12502 ft: 14914 corp: 23/1294b lim: 85 exec/s: 33 rss: 74Mb L: 55/85 MS: 1 ShuffleBytes- 00:06:38.016 [2024-10-17 13:14:45.879327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:38.016 [2024-10-17 13:14:45.879362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.016 [2024-10-17 13:14:45.879486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:38.016 [2024-10-17 13:14:45.879511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.016 [2024-10-17 13:14:45.879633] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:38.016 [2024-10-17 13:14:45.879656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:38.016 #34 NEW cov: 12502 ft: 14928 corp: 24/1346b lim: 85 exec/s: 34 rss: 74Mb L: 52/85 MS: 1 
ShuffleBytes- 00:06:38.016 [2024-10-17 13:14:45.949556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:38.016 [2024-10-17 13:14:45.949589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.016 [2024-10-17 13:14:45.949707] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:38.016 [2024-10-17 13:14:45.949730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.016 [2024-10-17 13:14:45.949859] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:38.016 [2024-10-17 13:14:45.949883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:38.016 #35 NEW cov: 12502 ft: 14949 corp: 25/1399b lim: 85 exec/s: 35 rss: 74Mb L: 53/85 MS: 1 InsertByte- 00:06:38.016 [2024-10-17 13:14:46.019747] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:38.016 [2024-10-17 13:14:46.019782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.016 [2024-10-17 13:14:46.019907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:38.016 [2024-10-17 13:14:46.019930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.016 [2024-10-17 13:14:46.020056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:38.016 [2024-10-17 13:14:46.020076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:38.016 #36 NEW cov: 12502 ft: 14958 corp: 26/1452b lim: 85 exec/s: 36 rss: 74Mb L: 53/85 MS: 1 InsertByte- 00:06:38.275 [2024-10-17 13:14:46.069854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:38.275 [2024-10-17 13:14:46.069886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.275 [2024-10-17 13:14:46.069976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:38.275 [2024-10-17 13:14:46.070001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.275 [2024-10-17 13:14:46.070125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:38.275 [2024-10-17 13:14:46.070147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:38.275 #37 NEW cov: 12502 ft: 14986 corp: 27/1504b lim: 85 exec/s: 37 rss: 74Mb L: 52/85 MS: 1 ChangeByte- 00:06:38.275 [2024-10-17 13:14:46.119809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:38.275 [2024-10-17 13:14:46.119843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:06:38.275 [2024-10-17 13:14:46.119962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:38.275 [2024-10-17 13:14:46.119982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.275 [2024-10-17 13:14:46.120110] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:38.275 [2024-10-17 13:14:46.120136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:38.275 #38 NEW cov: 12502 ft: 15046 corp: 28/1557b lim: 85 exec/s: 38 rss: 74Mb L: 53/85 MS: 1 ChangeBinInt- 00:06:38.276 [2024-10-17 13:14:46.190282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:38.276 [2024-10-17 13:14:46.190314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.276 [2024-10-17 13:14:46.190402] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:38.276 [2024-10-17 13:14:46.190426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.276 [2024-10-17 13:14:46.190551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:38.276 [2024-10-17 13:14:46.190575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:38.276 #41 NEW cov: 12502 ft: 15055 corp: 29/1611b lim: 85 exec/s: 41 rss: 74Mb L: 54/85 MS: 3 ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:06:38.276 [2024-10-17 13:14:46.240428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:38.276 [2024-10-17 13:14:46.240464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.276 [2024-10-17 13:14:46.240579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:38.276 [2024-10-17 13:14:46.240606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.276 [2024-10-17 13:14:46.240733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:38.276 [2024-10-17 13:14:46.240757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:38.276 #42 NEW cov: 12502 ft: 15107 corp: 30/1674b lim: 85 exec/s: 42 rss: 74Mb L: 63/85 MS: 1 InsertRepeatedBytes- 00:06:38.276 [2024-10-17 13:14:46.290358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:38.276 [2024-10-17 13:14:46.290387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.276 [2024-10-17 13:14:46.290506] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:38.276 [2024-10-17 13:14:46.290530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.276 #43 NEW cov: 12502 ft: 15142 corp: 31/1708b lim: 85 exec/s: 43 rss: 74Mb L: 34/85 MS: 1 ChangeBit- 00:06:38.536 [2024-10-17 13:14:46.340725] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:38.536 [2024-10-17 13:14:46.340759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:38.536 [2024-10-17 13:14:46.340886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:38.536 [2024-10-17 13:14:46.340912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:38.536 [2024-10-17 13:14:46.341042] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:38.536 [2024-10-17 13:14:46.341068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:38.536 #44 NEW cov: 12502 ft: 15163 corp: 32/1770b lim: 85 exec/s: 22 rss: 75Mb L: 62/85 MS: 1 EraseBytes- 00:06:38.536 #44 DONE cov: 12502 ft: 15163 corp: 32/1770b lim: 85 exec/s: 22 rss: 75Mb 00:06:38.536 Done 44 runs in 2 second(s) 00:06:38.536 13:14:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:06:38.536 13:14:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:38.536 13:14:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:38.536 13:14:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:06:38.536 13:14:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:06:38.536 13:14:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:38.536 13:14:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:38.536 13:14:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:06:38.536 13:14:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:06:38.536 13:14:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:38.536 13:14:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:38.536 13:14:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:06:38.536 13:14:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:06:38.536 13:14:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:06:38.536 13:14:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:06:38.536 13:14:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:38.536 13:14:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:38.536 13:14:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:38.536 13:14:46 llvm_fuzz.nvmf_llvm_fuzz -- 
nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:06:38.536 [2024-10-17 13:14:46.534493] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:06:38.536 [2024-10-17 13:14:46.534562] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3849987 ] 00:06:38.796 [2024-10-17 13:14:46.719058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.796 [2024-10-17 13:14:46.753781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.796 [2024-10-17 13:14:46.812604] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:38.796 [2024-10-17 13:14:46.828963] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:06:38.796 INFO: Running with entropic power schedule (0xFF, 100). 00:06:38.796 INFO: Seed: 823661722 00:06:39.055 INFO: Loaded 1 modules (384632 inline 8-bit counters): 384632 [0x2bf274c, 0x2c505c4), 00:06:39.055 INFO: Loaded 1 PC tables (384632 PCs): 384632 [0x2c505c8,0x322ed48), 00:06:39.055 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:06:39.055 INFO: A corpus is not provided, starting from an empty corpus 00:06:39.055 #2 INITED exec/s: 0 rss: 65Mb 00:06:39.055 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:39.055 This may also happen if the target rejected all inputs we tried so far 00:06:39.055 [2024-10-17 13:14:46.874439] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:39.055 [2024-10-17 13:14:46.874470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:39.055 [2024-10-17 13:14:46.874511] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:39.055 [2024-10-17 13:14:46.874528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:39.055 [2024-10-17 13:14:46.874588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:39.055 [2024-10-17 13:14:46.874604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:39.315 NEW_FUNC[1/714]: 0x466638 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:06:39.315 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:39.315 #14 NEW cov: 12204 ft: 12202 corp: 2/18b lim: 25 exec/s: 0 rss: 73Mb L: 17/17 MS: 2 CopyPart-InsertRepeatedBytes- 00:06:39.315 [2024-10-17 13:14:47.205367] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:39.315 [2024-10-17 13:14:47.205408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:39.315 [2024-10-17 13:14:47.205477] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:39.315 [2024-10-17 13:14:47.205499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:39.315 [2024-10-17 13:14:47.205566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:39.315 [2024-10-17 13:14:47.205587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:39.315 NEW_FUNC[1/1]: 0x17a13d8 in nvme_ctrlr_get_ready_timeout /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:1288 00:06:39.315 #15 NEW cov: 12322 ft: 12752 corp: 3/35b lim: 25 exec/s: 0 rss: 73Mb L: 17/17 MS: 1 ChangeBinInt- 00:06:39.315 [2024-10-17 13:14:47.265657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:39.315 [2024-10-17 13:14:47.265686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:39.315 [2024-10-17 13:14:47.265747] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:39.315 [2024-10-17 13:14:47.265763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:39.315 [2024-10-17 13:14:47.265821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:39.315 [2024-10-17 13:14:47.265837] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:39.315 [2024-10-17 13:14:47.265897] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:39.315 [2024-10-17 13:14:47.265914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:39.315 [2024-10-17 13:14:47.265975] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:06:39.315 [2024-10-17 13:14:47.265995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:39.315 #18 NEW cov: 12328 ft: 13530 corp: 4/60b lim: 25 exec/s: 0 rss: 73Mb L: 25/25 MS: 3 ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:06:39.315 [2024-10-17 13:14:47.305801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:39.315 [2024-10-17 13:14:47.305828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:39.315 [2024-10-17 13:14:47.305888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:39.315 [2024-10-17 13:14:47.305903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:39.315 [2024-10-17 13:14:47.305963] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:39.315 [2024-10-17 13:14:47.305979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:39.315 [2024-10-17 13:14:47.306037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:39.315 [2024-10-17 13:14:47.306053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:39.315 [2024-10-17 13:14:47.306113] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:06:39.315 [2024-10-17 13:14:47.306129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:39.315 #19 NEW cov: 12413 ft: 13803 corp: 5/85b lim: 25 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 ChangeBinInt- 00:06:39.575 [2024-10-17 13:14:47.365965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:39.575 [2024-10-17 13:14:47.365993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:39.575 [2024-10-17 13:14:47.366053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:39.575 [2024-10-17 13:14:47.366069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:39.575 [2024-10-17 13:14:47.366129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:39.575 [2024-10-17 13:14:47.366145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:39.575 [2024-10-17 13:14:47.366209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:39.575 [2024-10-17 13:14:47.366224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:39.575 [2024-10-17 13:14:47.366281] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:06:39.575 [2024-10-17 13:14:47.366298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:39.575 #20 NEW cov: 12413 ft: 13901 corp: 6/110b lim: 25 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 ChangeByte- 00:06:39.575 [2024-10-17 13:14:47.405816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:39.575 [2024-10-17 13:14:47.405844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:39.575 [2024-10-17 13:14:47.405895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:39.575 [2024-10-17 13:14:47.405911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:39.575 [2024-10-17 13:14:47.405968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:39.575 [2024-10-17 13:14:47.405986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:39.575 #21 NEW cov: 12413 ft: 14130 corp: 7/125b lim: 25 exec/s: 0 rss: 73Mb L: 15/25 MS: 1 EraseBytes- 00:06:39.575 [2024-10-17 13:14:47.446070] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:39.575 [2024-10-17 13:14:47.446099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:39.575 [2024-10-17 13:14:47.446161] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:39.575 [2024-10-17 13:14:47.446175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:39.575 [2024-10-17 13:14:47.446234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:39.575 [2024-10-17 13:14:47.446249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:39.575 [2024-10-17 13:14:47.446307] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:39.575 [2024-10-17 13:14:47.446323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:39.575 #22 NEW cov: 12413 ft: 14223 corp: 8/147b lim: 25 exec/s: 0 rss: 73Mb L: 22/25 MS: 1 EraseBytes- 00:06:39.575 [2024-10-17 13:14:47.505963] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:39.575 [2024-10-17 13:14:47.505990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:1 00:06:39.575 [2024-10-17 13:14:47.506032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:39.575 [2024-10-17 13:14:47.506048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:39.575 #24 NEW cov: 12413 ft: 14519 corp: 9/158b lim: 25 exec/s: 0 rss: 73Mb L: 11/25 MS: 2 CMP-CMP- DE: "\000\001"-"\001\000\000\000\000\000\000\000"- 00:06:39.575 [2024-10-17 13:14:47.546336] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:39.575 [2024-10-17 13:14:47.546363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:39.575 [2024-10-17 13:14:47.546420] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:39.575 [2024-10-17 13:14:47.546435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:39.575 [2024-10-17 13:14:47.546492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:39.575 [2024-10-17 13:14:47.546507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:39.575 [2024-10-17 13:14:47.546565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:39.575 [2024-10-17 13:14:47.546582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:39.575 #25 NEW cov: 12413 ft: 14559 corp: 10/180b lim: 25 exec/s: 0 rss: 73Mb L: 22/25 MS: 1 ChangeBit- 00:06:39.575 [2024-10-17 13:14:47.606625] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:39.575 [2024-10-17 13:14:47.606652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:39.575 [2024-10-17 13:14:47.606712] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:39.575 [2024-10-17 13:14:47.606729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:39.575 [2024-10-17 13:14:47.606791] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:39.575 [2024-10-17 13:14:47.606807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:39.575 [2024-10-17 13:14:47.606862] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:39.575 [2024-10-17 13:14:47.606879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:39.575 [2024-10-17 13:14:47.606938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:06:39.575 [2024-10-17 13:14:47.606954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:39.834 #26 NEW cov: 12413 ft: 14594 corp: 11/205b lim: 25 exec/s: 
0 rss: 74Mb L: 25/25 MS: 1 CMP- DE: "\377\377\000\024"- 00:06:39.834 [2024-10-17 13:14:47.666657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:39.834 [2024-10-17 13:14:47.666701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:39.834 [2024-10-17 13:14:47.666757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:39.834 [2024-10-17 13:14:47.666773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:39.834 [2024-10-17 13:14:47.666832] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:39.834 [2024-10-17 13:14:47.666849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:39.834 [2024-10-17 13:14:47.666908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:39.834 [2024-10-17 13:14:47.666925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:39.834 #27 NEW cov: 12413 ft: 14662 corp: 12/226b lim: 25 exec/s: 0 rss: 74Mb L: 21/25 MS: 1 CMP- DE: "\003\000\000\000"- 00:06:39.834 [2024-10-17 13:14:47.706548] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:39.834 [2024-10-17 13:14:47.706577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:39.834 [2024-10-17 13:14:47.706617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:39.834 [2024-10-17 13:14:47.706634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:39.834 #31 NEW cov: 12413 ft: 14714 corp: 13/236b lim: 25 exec/s: 0 rss: 74Mb L: 10/25 MS: 4 CopyPart-ChangeBit-InsertByte-CMP- DE: "H\000\000\000\000\000\000\000"- 00:06:39.834 [2024-10-17 13:14:47.746929] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:39.834 [2024-10-17 13:14:47.746957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:39.834 [2024-10-17 13:14:47.747009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:39.834 [2024-10-17 13:14:47.747025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:39.834 [2024-10-17 13:14:47.747085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:39.834 [2024-10-17 13:14:47.747101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:39.834 [2024-10-17 13:14:47.747162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:39.834 [2024-10-17 13:14:47.747182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 
00:06:39.834 NEW_FUNC[1/1]: 0x1bff788 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:39.834 #32 NEW cov: 12436 ft: 14778 corp: 14/258b lim: 25 exec/s: 0 rss: 74Mb L: 22/25 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:39.834 [2024-10-17 13:14:47.786754] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:39.834 [2024-10-17 13:14:47.786784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:39.834 [2024-10-17 13:14:47.786823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:39.834 [2024-10-17 13:14:47.786841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:39.834 #33 NEW cov: 12436 ft: 14810 corp: 15/268b lim: 25 exec/s: 0 rss: 74Mb L: 10/25 MS: 1 EraseBytes- 00:06:39.834 [2024-10-17 13:14:47.846945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:39.834 [2024-10-17 13:14:47.846973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:39.834 [2024-10-17 13:14:47.847036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:39.834 [2024-10-17 13:14:47.847051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:39.834 #34 NEW cov: 12436 ft: 14843 corp: 16/279b lim: 25 exec/s: 34 rss: 74Mb L: 11/25 MS: 1 ShuffleBytes- 00:06:40.093 [2024-10-17 13:14:47.887227] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:40.093 [2024-10-17 13:14:47.887255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.093 [2024-10-17 13:14:47.887298] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:40.093 [2024-10-17 13:14:47.887313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.093 [2024-10-17 13:14:47.887371] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:40.093 [2024-10-17 13:14:47.887388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:40.093 #35 NEW cov: 12436 ft: 14848 corp: 17/296b lim: 25 exec/s: 35 rss: 74Mb L: 17/25 MS: 1 CrossOver- 00:06:40.093 [2024-10-17 13:14:47.927138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:40.093 [2024-10-17 13:14:47.927170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.093 [2024-10-17 13:14:47.927221] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:40.093 [2024-10-17 13:14:47.927238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.093 #36 NEW cov: 12436 ft: 14871 corp: 18/306b 
lim: 25 exec/s: 36 rss: 74Mb L: 10/25 MS: 1 ChangeBit- 00:06:40.093 [2024-10-17 13:14:47.987607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:40.093 [2024-10-17 13:14:47.987635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.093 [2024-10-17 13:14:47.987693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:40.093 [2024-10-17 13:14:47.987711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.093 [2024-10-17 13:14:47.987770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:40.093 [2024-10-17 13:14:47.987787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:40.094 [2024-10-17 13:14:47.987844] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:40.094 [2024-10-17 13:14:47.987859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:40.094 #37 NEW cov: 12436 ft: 14883 corp: 19/328b lim: 25 exec/s: 37 rss: 74Mb L: 22/25 MS: 1 ChangeByte- 00:06:40.094 [2024-10-17 13:14:48.047805] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:40.094 [2024-10-17 13:14:48.047833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.094 [2024-10-17 13:14:48.047913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:40.094 [2024-10-17 13:14:48.047930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.094 [2024-10-17 13:14:48.047990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:40.094 [2024-10-17 13:14:48.048008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:40.094 [2024-10-17 13:14:48.048068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:40.094 [2024-10-17 13:14:48.048084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:40.094 #38 NEW cov: 12436 ft: 14894 corp: 20/352b lim: 25 exec/s: 38 rss: 74Mb L: 24/25 MS: 1 CrossOver- 00:06:40.094 [2024-10-17 13:14:48.107827] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:40.094 [2024-10-17 13:14:48.107854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.094 [2024-10-17 13:14:48.107895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:40.094 [2024-10-17 13:14:48.107911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.094 [2024-10-17 13:14:48.107968] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:40.094 [2024-10-17 13:14:48.107984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:40.094 #39 NEW cov: 12436 ft: 14937 corp: 21/370b lim: 25 exec/s: 39 rss: 74Mb L: 18/25 MS: 1 EraseBytes- 00:06:40.354 [2024-10-17 13:14:48.148074] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:40.354 [2024-10-17 13:14:48.148102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.354 [2024-10-17 13:14:48.148166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:40.354 [2024-10-17 13:14:48.148183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.354 [2024-10-17 13:14:48.148239] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:40.354 [2024-10-17 13:14:48.148255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:40.354 [2024-10-17 13:14:48.148312] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:40.354 [2024-10-17 13:14:48.148329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:40.354 #40 NEW cov: 12436 ft: 15029 corp: 22/392b lim: 25 exec/s: 40 rss: 74Mb L: 22/25 MS: 1 CrossOver- 00:06:40.354 [2024-10-17 13:14:48.208397] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:40.354 [2024-10-17 13:14:48.208425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.354 [2024-10-17 13:14:48.208485] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:40.354 [2024-10-17 13:14:48.208501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.354 [2024-10-17 13:14:48.208559] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:40.354 [2024-10-17 13:14:48.208575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:40.354 [2024-10-17 13:14:48.208632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:40.354 [2024-10-17 13:14:48.208648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:40.354 [2024-10-17 13:14:48.208704] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:06:40.354 [2024-10-17 13:14:48.208720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:40.354 #41 NEW cov: 12436 ft: 15050 corp: 23/417b lim: 25 exec/s: 41 rss: 74Mb L: 25/25 MS: 1 ChangeBit- 00:06:40.354 [2024-10-17 13:14:48.268562] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:40.354 [2024-10-17 13:14:48.268589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.354 [2024-10-17 13:14:48.268648] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:40.354 [2024-10-17 13:14:48.268664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.354 [2024-10-17 13:14:48.268722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:40.354 [2024-10-17 13:14:48.268738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:40.354 [2024-10-17 13:14:48.268794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:40.354 [2024-10-17 13:14:48.268810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:40.354 [2024-10-17 13:14:48.268866] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:06:40.354 [2024-10-17 13:14:48.268881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:40.354 #42 NEW cov: 12436 ft: 15061 corp: 24/442b lim: 25 exec/s: 42 rss: 74Mb L: 25/25 MS: 1 ChangeBinInt- 00:06:40.354 [2024-10-17 13:14:48.308528] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:40.354 [2024-10-17 13:14:48.308556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.354 [2024-10-17 13:14:48.308618] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:40.354 [2024-10-17 13:14:48.308634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.354 [2024-10-17 13:14:48.308705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:40.354 [2024-10-17 13:14:48.308722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:40.354 [2024-10-17 13:14:48.308785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:40.354 [2024-10-17 13:14:48.308802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:40.354 #43 NEW cov: 12436 ft: 15080 corp: 25/465b lim: 25 exec/s: 43 rss: 74Mb L: 23/25 MS: 1 InsertByte- 00:06:40.354 [2024-10-17 13:14:48.368442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:40.354 [2024-10-17 13:14:48.368471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.354 [2024-10-17 13:14:48.368511] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:40.354 [2024-10-17 13:14:48.368528] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.354 #44 NEW cov: 12436 ft: 15090 corp: 26/475b lim: 25 exec/s: 44 rss: 74Mb L: 10/25 MS: 1 ShuffleBytes- 00:06:40.613 [2024-10-17 13:14:48.408609] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:40.613 [2024-10-17 13:14:48.408636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.613 [2024-10-17 13:14:48.408678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:40.613 [2024-10-17 13:14:48.408695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.613 #45 NEW cov: 12436 ft: 15104 corp: 27/486b lim: 25 exec/s: 45 rss: 74Mb L: 11/25 MS: 1 ChangeBinInt- 00:06:40.613 [2024-10-17 13:14:48.448930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:40.613 [2024-10-17 13:14:48.448958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.613 [2024-10-17 13:14:48.449018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:40.613 [2024-10-17 13:14:48.449035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.613 [2024-10-17 13:14:48.449096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:40.613 [2024-10-17 13:14:48.449112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:40.613 [2024-10-17 13:14:48.449175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:40.613 [2024-10-17 13:14:48.449190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:40.613 #46 NEW cov: 12436 ft: 15150 corp: 28/509b lim: 25 exec/s: 46 rss: 74Mb L: 23/25 MS: 1 InsertByte- 00:06:40.613 [2024-10-17 13:14:48.489071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:40.614 [2024-10-17 13:14:48.489100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.614 [2024-10-17 13:14:48.489166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:40.614 [2024-10-17 13:14:48.489182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.614 [2024-10-17 13:14:48.489244] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:40.614 [2024-10-17 13:14:48.489260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:40.614 [2024-10-17 13:14:48.489319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:40.614 [2024-10-17 13:14:48.489339] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:40.614 #47 NEW cov: 12436 ft: 15160 corp: 29/531b lim: 25 exec/s: 47 rss: 74Mb L: 22/25 MS: 1 ChangeBit- 00:06:40.614 [2024-10-17 13:14:48.529201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:40.614 [2024-10-17 13:14:48.529229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.614 [2024-10-17 13:14:48.529287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:40.614 [2024-10-17 13:14:48.529305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.614 [2024-10-17 13:14:48.529361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:40.614 [2024-10-17 13:14:48.529378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:40.614 [2024-10-17 13:14:48.529436] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:40.614 [2024-10-17 13:14:48.529453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:40.614 #48 NEW cov: 12436 ft: 15161 corp: 30/555b lim: 25 exec/s: 48 rss: 74Mb L: 24/25 MS: 1 CopyPart- 00:06:40.614 [2024-10-17 13:14:48.589546] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:40.614 [2024-10-17 13:14:48.589573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.614 [2024-10-17 13:14:48.589649] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:40.614 [2024-10-17 13:14:48.589665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.614 [2024-10-17 13:14:48.589723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:40.614 [2024-10-17 13:14:48.589738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:40.614 [2024-10-17 13:14:48.589796] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:40.614 [2024-10-17 13:14:48.589810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:40.614 [2024-10-17 13:14:48.589868] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:06:40.614 [2024-10-17 13:14:48.589884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:40.614 #49 NEW cov: 12436 ft: 15197 corp: 31/580b lim: 25 exec/s: 49 rss: 75Mb L: 25/25 MS: 1 ChangeByte- 00:06:40.614 [2024-10-17 13:14:48.649566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:40.614 [2024-10-17 13:14:48.649595] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.614 [2024-10-17 13:14:48.649655] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:40.614 [2024-10-17 13:14:48.649671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.614 [2024-10-17 13:14:48.649729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:40.614 [2024-10-17 13:14:48.649745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:40.614 [2024-10-17 13:14:48.649805] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:40.614 [2024-10-17 13:14:48.649826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:40.874 #50 NEW cov: 12436 ft: 15223 corp: 32/603b lim: 25 exec/s: 50 rss: 75Mb L: 23/25 MS: 1 ChangeByte- 00:06:40.874 [2024-10-17 13:14:48.709879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:40.874 [2024-10-17 13:14:48.709907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.874 [2024-10-17 13:14:48.709966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:40.874 [2024-10-17 13:14:48.709982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.874 [2024-10-17 13:14:48.710039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:40.874 [2024-10-17 13:14:48.710054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:40.874 [2024-10-17 13:14:48.710108] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:40.874 [2024-10-17 13:14:48.710123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:40.874 [2024-10-17 13:14:48.710201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:06:40.874 [2024-10-17 13:14:48.710219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:40.874 #51 NEW cov: 12436 ft: 15231 corp: 33/628b lim: 25 exec/s: 51 rss: 75Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:06:40.874 [2024-10-17 13:14:48.749797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:40.874 [2024-10-17 13:14:48.749824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.874 [2024-10-17 13:14:48.749884] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:40.874 [2024-10-17 13:14:48.749900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:06:40.874 [2024-10-17 13:14:48.749961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:40.874 [2024-10-17 13:14:48.749977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:40.874 [2024-10-17 13:14:48.750038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:40.874 [2024-10-17 13:14:48.750054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:40.874 #52 NEW cov: 12436 ft: 15245 corp: 34/648b lim: 25 exec/s: 52 rss: 75Mb L: 20/25 MS: 1 EraseBytes- 00:06:40.874 [2024-10-17 13:14:48.789645] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:40.874 [2024-10-17 13:14:48.789673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.874 [2024-10-17 13:14:48.789726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:40.874 [2024-10-17 13:14:48.789742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.874 #53 NEW cov: 12436 ft: 15260 corp: 35/658b lim: 25 exec/s: 53 rss: 75Mb L: 10/25 MS: 1 ChangeByte- 00:06:40.874 [2024-10-17 13:14:48.849836] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:40.874 [2024-10-17 13:14:48.849863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.874 [2024-10-17 13:14:48.849928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:40.874 [2024-10-17 13:14:48.849945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.874 #54 NEW cov: 12436 ft: 15262 corp: 36/668b lim: 25 exec/s: 27 rss: 75Mb L: 10/25 MS: 1 PersAutoDict- DE: "\003\000\000\000"- 00:06:40.874 #54 DONE cov: 12436 ft: 15262 corp: 36/668b lim: 25 exec/s: 27 rss: 75Mb 00:06:40.874 ###### Recommended dictionary. ###### 00:06:40.874 "\000\001" # Uses: 0 00:06:40.874 "\001\000\000\000\000\000\000\000" # Uses: 1 00:06:40.874 "\377\377\000\024" # Uses: 0 00:06:40.874 "\003\000\000\000" # Uses: 1 00:06:40.874 "H\000\000\000\000\000\000\000" # Uses: 0 00:06:40.874 ###### End of recommended dictionary. 
###### 00:06:40.874 Done 54 runs in 2 second(s) 00:06:41.134 13:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:06:41.134 13:14:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:41.134 13:14:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:41.134 13:14:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:06:41.134 13:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:06:41.134 13:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:41.134 13:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:41.134 13:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:06:41.134 13:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:06:41.134 13:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:41.134 13:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:41.134 13:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:06:41.134 13:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:06:41.134 13:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:06:41.134 13:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:06:41.134 13:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:41.134 13:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:41.134 13:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:41.134 13:14:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:06:41.135 [2024-10-17 13:14:49.020978] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
00:06:41.135 [2024-10-17 13:14:49.021053] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3850383 ] 00:06:41.393 [2024-10-17 13:14:49.199866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.393 [2024-10-17 13:14:49.234101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.393 [2024-10-17 13:14:49.293052] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:41.393 [2024-10-17 13:14:49.309439] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:06:41.393 INFO: Running with entropic power schedule (0xFF, 100). 00:06:41.393 INFO: Seed: 3302674259 00:06:41.393 INFO: Loaded 1 modules (384632 inline 8-bit counters): 384632 [0x2bf274c, 0x2c505c4), 00:06:41.393 INFO: Loaded 1 PC tables (384632 PCs): 384632 [0x2c505c8,0x322ed48), 00:06:41.393 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:06:41.393 INFO: A corpus is not provided, starting from an empty corpus 00:06:41.393 #2 INITED exec/s: 0 rss: 65Mb 00:06:41.393 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:41.393 This may also happen if the target rejected all inputs we tried so far 00:06:41.393 [2024-10-17 13:14:49.380001] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2315255808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.393 [2024-10-17 13:14:49.380038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.393 [2024-10-17 13:14:49.380165] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.393 [2024-10-17 13:14:49.380189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:41.393 [2024-10-17 13:14:49.380317] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.393 [2024-10-17 13:14:49.380338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:41.652 NEW_FUNC[1/716]: 0x467728 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:06:41.652 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:41.652 #4 NEW cov: 12281 ft: 12281 corp: 2/64b lim: 100 exec/s: 0 rss: 73Mb L: 63/63 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:41.910 [2024-10-17 13:14:49.730612] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2315255808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.910 [2024-10-17 13:14:49.730658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.910 [2024-10-17 13:14:49.730788] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.910 [2024-10-17 
13:14:49.730813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:41.910 #5 NEW cov: 12394 ft: 13320 corp: 3/120b lim: 100 exec/s: 0 rss: 73Mb L: 56/63 MS: 1 EraseBytes- 00:06:41.910 [2024-10-17 13:14:49.801038] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2315255808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.910 [2024-10-17 13:14:49.801075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.910 [2024-10-17 13:14:49.801209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.910 [2024-10-17 13:14:49.801237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:41.910 [2024-10-17 13:14:49.801358] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.910 [2024-10-17 13:14:49.801382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:41.910 #6 NEW cov: 12400 ft: 13521 corp: 4/183b lim: 100 exec/s: 0 rss: 73Mb L: 63/63 MS: 1 ShuffleBytes- 00:06:41.910 [2024-10-17 13:14:49.851179] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2315255808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.910 [2024-10-17 13:14:49.851214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.910 [2024-10-17 13:14:49.851315] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.910 [2024-10-17 13:14:49.851333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:41.910 [2024-10-17 13:14:49.851443] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.910 [2024-10-17 13:14:49.851470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:41.910 #7 NEW cov: 12485 ft: 13786 corp: 5/243b lim: 100 exec/s: 0 rss: 73Mb L: 60/63 MS: 1 CopyPart- 00:06:41.910 [2024-10-17 13:14:49.921585] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2315255808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.910 [2024-10-17 13:14:49.921617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.910 [2024-10-17 13:14:49.921694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.910 [2024-10-17 13:14:49.921716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:41.910 [2024-10-17 13:14:49.921829] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.910 [2024-10-17 13:14:49.921854] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:41.910 [2024-10-17 13:14:49.921975] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.910 [2024-10-17 13:14:49.921998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:41.910 #8 NEW cov: 12485 ft: 14154 corp: 6/324b lim: 100 exec/s: 0 rss: 73Mb L: 81/81 MS: 1 CrossOver- 00:06:42.169 [2024-10-17 13:14:49.971280] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:242833424384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.169 [2024-10-17 13:14:49.971323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.169 [2024-10-17 13:14:49.971451] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.169 [2024-10-17 13:14:49.971476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:42.169 #9 NEW cov: 12485 ft: 14239 corp: 7/380b lim: 100 exec/s: 0 rss: 73Mb L: 56/81 MS: 1 ChangeBinInt- 00:06:42.169 [2024-10-17 13:14:50.021266] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.169 [2024-10-17 13:14:50.021294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.169 #15 NEW cov: 12485 ft: 15117 corp: 8/417b lim: 100 exec/s: 0 rss: 73Mb L: 37/81 MS: 1 InsertRepeatedBytes- 00:06:42.169 [2024-10-17 13:14:50.071851] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2315255808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.169 [2024-10-17 13:14:50.071890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.169 [2024-10-17 13:14:50.071982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.169 [2024-10-17 13:14:50.072005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:42.169 [2024-10-17 13:14:50.072129] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.169 [2024-10-17 13:14:50.072197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:42.169 #16 NEW cov: 12485 ft: 15183 corp: 9/480b lim: 100 exec/s: 0 rss: 73Mb L: 63/81 MS: 1 ChangeBinInt- 00:06:42.169 [2024-10-17 13:14:50.132051] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1101826883584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.169 [2024-10-17 13:14:50.132086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.169 [2024-10-17 13:14:50.132187] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.169 [2024-10-17 13:14:50.132212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:42.169 [2024-10-17 13:14:50.132334] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.169 [2024-10-17 13:14:50.132354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:42.169 #17 NEW cov: 12485 ft: 15283 corp: 10/540b lim: 100 exec/s: 0 rss: 73Mb L: 60/81 MS: 1 ChangeBinInt- 00:06:42.169 [2024-10-17 13:14:50.202271] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1101826883584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.169 [2024-10-17 13:14:50.202305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.169 [2024-10-17 13:14:50.202419] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.169 [2024-10-17 13:14:50.202440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:42.169 [2024-10-17 13:14:50.202565] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.169 [2024-10-17 13:14:50.202587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:42.428 NEW_FUNC[1/1]: 0x1bff788 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:42.428 #18 NEW cov: 12508 ft: 15329 corp: 11/600b lim: 100 exec/s: 0 rss: 74Mb L: 60/81 MS: 1 CopyPart- 00:06:42.428 [2024-10-17 13:14:50.272656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2315255808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.428 [2024-10-17 13:14:50.272692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.428 [2024-10-17 13:14:50.272796] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.428 [2024-10-17 13:14:50.272821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:42.428 [2024-10-17 13:14:50.272942] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.428 [2024-10-17 13:14:50.272967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:42.428 [2024-10-17 13:14:50.273089] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.428 [2024-10-17 13:14:50.273112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:42.428 #19 NEW cov: 12508 ft: 15417 corp: 12/691b lim: 100 exec/s: 0 rss: 74Mb L: 91/91 MS: 1 
InsertRepeatedBytes- 00:06:42.428 [2024-10-17 13:14:50.322905] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2315255808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.428 [2024-10-17 13:14:50.322937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.428 [2024-10-17 13:14:50.322999] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.428 [2024-10-17 13:14:50.323024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:42.428 [2024-10-17 13:14:50.323147] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.428 [2024-10-17 13:14:50.323171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:42.428 [2024-10-17 13:14:50.323290] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.428 [2024-10-17 13:14:50.323313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:42.428 #20 NEW cov: 12508 ft: 15447 corp: 13/784b lim: 100 exec/s: 20 rss: 74Mb L: 93/93 MS: 1 CopyPart- 00:06:42.428 [2024-10-17 13:14:50.392833] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:242833424384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.428 [2024-10-17 13:14:50.392866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.428 [2024-10-17 13:14:50.392965] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.428 [2024-10-17 13:14:50.392988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:42.428 [2024-10-17 13:14:50.393108] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.428 [2024-10-17 13:14:50.393135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:42.428 #21 NEW cov: 12508 ft: 15476 corp: 14/855b lim: 100 exec/s: 21 rss: 74Mb L: 71/93 MS: 1 CopyPart- 00:06:42.428 [2024-10-17 13:14:50.463004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2315255808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.428 [2024-10-17 13:14:50.463036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.428 [2024-10-17 13:14:50.463129] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.428 [2024-10-17 13:14:50.463157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:42.428 [2024-10-17 13:14:50.463287] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.428 [2024-10-17 13:14:50.463312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:42.687 #22 NEW cov: 12508 ft: 15510 corp: 15/918b lim: 100 exec/s: 22 rss: 74Mb L: 63/93 MS: 1 CopyPart- 00:06:42.687 [2024-10-17 13:14:50.512622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2315255808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.687 [2024-10-17 13:14:50.512656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.687 #24 NEW cov: 12508 ft: 15561 corp: 16/942b lim: 100 exec/s: 24 rss: 74Mb L: 24/93 MS: 2 CrossOver-CopyPart- 00:06:42.687 [2024-10-17 13:14:50.563100] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:242833424384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.687 [2024-10-17 13:14:50.563130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.687 [2024-10-17 13:14:50.563253] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.687 [2024-10-17 13:14:50.563276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:42.687 #25 NEW cov: 12508 ft: 15585 corp: 17/998b lim: 100 exec/s: 25 rss: 74Mb L: 56/93 MS: 1 ChangeByte- 00:06:42.687 [2024-10-17 13:14:50.613492] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2315255808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.687 [2024-10-17 13:14:50.613528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.687 [2024-10-17 13:14:50.613626] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.687 [2024-10-17 13:14:50.613649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:42.687 [2024-10-17 13:14:50.613773] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.687 [2024-10-17 13:14:50.613798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:42.687 #26 NEW cov: 12508 ft: 15629 corp: 18/1061b lim: 100 exec/s: 26 rss: 74Mb L: 63/93 MS: 1 ShuffleBytes- 00:06:42.687 [2024-10-17 13:14:50.663305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2315255808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.687 [2024-10-17 13:14:50.663339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.687 [2024-10-17 13:14:50.663465] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4294967296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.687 [2024-10-17 13:14:50.663490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:42.687 #27 NEW cov: 12508 ft: 15661 corp: 19/1117b lim: 100 exec/s: 27 rss: 74Mb L: 56/93 MS: 1 ChangeBit- 00:06:42.687 [2024-10-17 13:14:50.714057] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2315255808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.687 [2024-10-17 13:14:50.714088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.687 [2024-10-17 13:14:50.714203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.687 [2024-10-17 13:14:50.714225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:42.687 [2024-10-17 13:14:50.714344] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.687 [2024-10-17 13:14:50.714369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:42.687 [2024-10-17 13:14:50.714479] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.687 [2024-10-17 13:14:50.714502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:42.687 #28 NEW cov: 12508 ft: 15675 corp: 20/1208b lim: 100 exec/s: 28 rss: 74Mb L: 91/93 MS: 1 ShuffleBytes- 00:06:42.946 [2024-10-17 13:14:50.763359] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3489660928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.946 [2024-10-17 13:14:50.763389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.946 #32 NEW cov: 12508 ft: 15679 corp: 21/1228b lim: 100 exec/s: 32 rss: 74Mb L: 20/93 MS: 4 ChangeByte-InsertByte-ShuffleBytes-CrossOver- 00:06:42.946 [2024-10-17 13:14:50.814102] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2315255808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.946 [2024-10-17 13:14:50.814135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.946 [2024-10-17 13:14:50.814252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.946 [2024-10-17 13:14:50.814277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:42.946 [2024-10-17 13:14:50.814415] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.946 [2024-10-17 13:14:50.814439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:42.946 #33 NEW cov: 12508 ft: 15764 corp: 22/1295b lim: 100 exec/s: 33 rss: 74Mb L: 67/93 MS: 1 CMP- DE: "\010\000\000\000"- 00:06:42.946 [2024-10-17 13:14:50.863650] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:595020742656 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:42.946 [2024-10-17 13:14:50.863678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.946 #34 NEW cov: 12508 ft: 15797 corp: 23/1326b lim: 100 exec/s: 34 rss: 74Mb L: 31/93 MS: 1 CrossOver- 00:06:42.946 [2024-10-17 13:14:50.913964] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:242833424384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.946 [2024-10-17 13:14:50.913991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.946 #35 NEW cov: 12508 ft: 15853 corp: 24/1363b lim: 100 exec/s: 35 rss: 74Mb L: 37/93 MS: 1 EraseBytes- 00:06:42.946 [2024-10-17 13:14:50.984612] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2315255808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.946 [2024-10-17 13:14:50.984648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.946 [2024-10-17 13:14:50.984744] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.946 [2024-10-17 13:14:50.984762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:42.946 [2024-10-17 13:14:50.984889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:34359738368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.946 [2024-10-17 13:14:50.984911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.205 #36 NEW cov: 12508 ft: 15871 corp: 25/1423b lim: 100 exec/s: 36 rss: 74Mb L: 60/93 MS: 1 PersAutoDict- DE: "\010\000\000\000"- 00:06:43.205 [2024-10-17 13:14:51.034777] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1101826883584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.205 [2024-10-17 13:14:51.034810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.205 [2024-10-17 13:14:51.034939] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.205 [2024-10-17 13:14:51.034962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.205 [2024-10-17 13:14:51.035091] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.205 [2024-10-17 13:14:51.035117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.205 #37 NEW cov: 12508 ft: 15896 corp: 26/1483b lim: 100 exec/s: 37 rss: 74Mb L: 60/93 MS: 1 ChangeBit- 00:06:43.205 [2024-10-17 13:14:51.104490] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:242833424384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.205 [2024-10-17 13:14:51.104521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:06:43.205 #38 NEW cov: 12508 ft: 15980 corp: 27/1520b lim: 100 exec/s: 38 rss: 74Mb L: 37/93 MS: 1 ChangeBinInt- 00:06:43.205 [2024-10-17 13:14:51.175496] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2315255808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.205 [2024-10-17 13:14:51.175531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.205 [2024-10-17 13:14:51.175606] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.205 [2024-10-17 13:14:51.175632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.205 [2024-10-17 13:14:51.175755] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.205 [2024-10-17 13:14:51.175779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.205 [2024-10-17 13:14:51.175900] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.205 [2024-10-17 13:14:51.175925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:43.205 #39 NEW cov: 12508 ft: 16000 corp: 28/1611b lim: 100 exec/s: 39 rss: 74Mb L: 91/93 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\017"- 00:06:43.205 [2024-10-17 13:14:51.225584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2315255808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.205 [2024-10-17 13:14:51.225616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.206 [2024-10-17 13:14:51.225698] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.206 [2024-10-17 13:14:51.225721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.206 [2024-10-17 13:14:51.225834] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.206 [2024-10-17 13:14:51.225857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.206 [2024-10-17 13:14:51.225982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.206 [2024-10-17 13:14:51.226007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:43.465 #40 NEW cov: 12508 ft: 16056 corp: 29/1692b lim: 100 exec/s: 40 rss: 74Mb L: 81/93 MS: 1 ChangeBinInt- 00:06:43.465 [2024-10-17 13:14:51.295373] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1101826883584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.465 [2024-10-17 13:14:51.295400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.465 [2024-10-17 13:14:51.295513] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.465 [2024-10-17 13:14:51.295535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.465 #41 NEW cov: 12508 ft: 16072 corp: 30/1734b lim: 100 exec/s: 41 rss: 74Mb L: 42/93 MS: 1 EraseBytes- 00:06:43.465 [2024-10-17 13:14:51.365176] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2315255808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.465 [2024-10-17 13:14:51.365201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.465 #42 NEW cov: 12508 ft: 16099 corp: 31/1769b lim: 100 exec/s: 21 rss: 75Mb L: 35/93 MS: 1 EraseBytes- 00:06:43.465 #42 DONE cov: 12508 ft: 16099 corp: 31/1769b lim: 100 exec/s: 21 rss: 75Mb 00:06:43.465 ###### Recommended dictionary. ###### 00:06:43.465 "\010\000\000\000" # Uses: 1 00:06:43.465 "\377\377\377\377\377\377\377\017" # Uses: 0 00:06:43.465 ###### End of recommended dictionary. ###### 00:06:43.465 Done 42 runs in 2 second(s) 00:06:43.465 13:14:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:06:43.465 13:14:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:43.465 13:14:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:43.465 13:14:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:06:43.465 00:06:43.465 real 1m3.021s 00:06:43.465 user 1m39.711s 00:06:43.465 sys 0m7.062s 00:06:43.465 13:14:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.465 13:14:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:06:43.465 ************************************ 00:06:43.465 END TEST nvmf_llvm_fuzz 00:06:43.465 ************************************ 00:06:43.723 13:14:51 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:43.723 13:14:51 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:43.723 13:14:51 llvm_fuzz -- fuzz/llvm.sh@20 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:06:43.723 13:14:51 llvm_fuzz -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.723 13:14:51 llvm_fuzz -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.723 13:14:51 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:06:43.723 ************************************ 00:06:43.723 START TEST vfio_llvm_fuzz 00:06:43.723 ************************************ 00:06:43.723 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:06:43.723 * Looking for test storage... 
00:06:43.724 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:06:43.724 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:43.724 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:06:43.724 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:43.724 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:43.724 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.724 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.724 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.724 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.986 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:43.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.987 --rc genhtml_branch_coverage=1 00:06:43.987 --rc genhtml_function_coverage=1 00:06:43.987 --rc genhtml_legend=1 00:06:43.987 --rc geninfo_all_blocks=1 00:06:43.987 --rc geninfo_unexecuted_blocks=1 00:06:43.987 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:43.987 ' 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:43.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.987 --rc genhtml_branch_coverage=1 00:06:43.987 --rc genhtml_function_coverage=1 00:06:43.987 --rc genhtml_legend=1 00:06:43.987 --rc geninfo_all_blocks=1 00:06:43.987 --rc geninfo_unexecuted_blocks=1 00:06:43.987 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:43.987 ' 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:43.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.987 --rc genhtml_branch_coverage=1 00:06:43.987 --rc genhtml_function_coverage=1 00:06:43.987 --rc genhtml_legend=1 00:06:43.987 --rc geninfo_all_blocks=1 00:06:43.987 --rc geninfo_unexecuted_blocks=1 00:06:43.987 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:43.987 ' 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:43.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.987 --rc genhtml_branch_coverage=1 00:06:43.987 --rc genhtml_function_coverage=1 00:06:43.987 --rc genhtml_legend=1 00:06:43.987 --rc geninfo_all_blocks=1 00:06:43.987 --rc geninfo_unexecuted_blocks=1 00:06:43.987 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:43.987 ' 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- 
# CONFIG_OCF_PATH= 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER=y 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:06:43.987 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_HAVE_EVP_MAC=y 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_SHARED=n 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_FC=n 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_URING=n 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # 
readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:43.988 #define SPDK_CONFIG_H 00:06:43.988 #define SPDK_CONFIG_AIO_FSDEV 1 00:06:43.988 #define SPDK_CONFIG_APPS 1 00:06:43.988 #define SPDK_CONFIG_ARCH native 00:06:43.988 #undef SPDK_CONFIG_ASAN 00:06:43.988 #undef SPDK_CONFIG_AVAHI 00:06:43.988 #undef SPDK_CONFIG_CET 00:06:43.988 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:06:43.988 #define SPDK_CONFIG_COVERAGE 1 00:06:43.988 #define SPDK_CONFIG_CROSS_PREFIX 00:06:43.988 #undef SPDK_CONFIG_CRYPTO 00:06:43.988 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:43.988 #undef SPDK_CONFIG_CUSTOMOCF 00:06:43.988 #undef SPDK_CONFIG_DAOS 00:06:43.988 #define SPDK_CONFIG_DAOS_DIR 00:06:43.988 #define SPDK_CONFIG_DEBUG 1 00:06:43.988 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:43.988 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:43.988 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:43.988 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:43.988 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:43.988 #undef SPDK_CONFIG_DPDK_UADK 00:06:43.988 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:43.988 #define SPDK_CONFIG_EXAMPLES 1 00:06:43.988 #undef SPDK_CONFIG_FC 00:06:43.988 #define SPDK_CONFIG_FC_PATH 00:06:43.988 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:43.988 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:43.988 #define SPDK_CONFIG_FSDEV 1 00:06:43.988 #undef SPDK_CONFIG_FUSE 00:06:43.988 #define SPDK_CONFIG_FUZZER 1 00:06:43.988 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:43.988 #undef SPDK_CONFIG_GOLANG 00:06:43.988 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:43.988 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:43.988 #define 
SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:43.988 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:43.988 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:43.988 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:43.988 #undef SPDK_CONFIG_HAVE_LZ4 00:06:43.988 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:06:43.988 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:06:43.988 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:43.988 #define SPDK_CONFIG_IDXD 1 00:06:43.988 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:43.988 #undef SPDK_CONFIG_IPSEC_MB 00:06:43.988 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:43.988 #define SPDK_CONFIG_ISAL 1 00:06:43.988 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:43.988 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:43.988 #define SPDK_CONFIG_LIBDIR 00:06:43.988 #undef SPDK_CONFIG_LTO 00:06:43.988 #define SPDK_CONFIG_MAX_LCORES 128 00:06:43.988 #define SPDK_CONFIG_NVME_CUSE 1 00:06:43.988 #undef SPDK_CONFIG_OCF 00:06:43.988 #define SPDK_CONFIG_OCF_PATH 00:06:43.988 #define SPDK_CONFIG_OPENSSL_PATH 00:06:43.988 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:43.988 #define SPDK_CONFIG_PGO_DIR 00:06:43.988 #undef SPDK_CONFIG_PGO_USE 00:06:43.988 #define SPDK_CONFIG_PREFIX /usr/local 00:06:43.988 #undef SPDK_CONFIG_RAID5F 00:06:43.988 #undef SPDK_CONFIG_RBD 00:06:43.988 #define SPDK_CONFIG_RDMA 1 00:06:43.988 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:43.988 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:43.988 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:43.988 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:43.988 #undef SPDK_CONFIG_SHARED 00:06:43.988 #undef SPDK_CONFIG_SMA 00:06:43.988 #define SPDK_CONFIG_TESTS 1 00:06:43.988 #undef SPDK_CONFIG_TSAN 00:06:43.988 #define SPDK_CONFIG_UBLK 1 00:06:43.988 #define SPDK_CONFIG_UBSAN 1 00:06:43.988 #undef SPDK_CONFIG_UNIT_TESTS 00:06:43.988 #undef SPDK_CONFIG_URING 00:06:43.988 #define SPDK_CONFIG_URING_PATH 00:06:43.988 #undef SPDK_CONFIG_URING_ZNS 00:06:43.988 #undef SPDK_CONFIG_USDT 00:06:43.988 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:43.988 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:43.988 #define SPDK_CONFIG_VFIO_USER 1 00:06:43.988 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:43.988 #define SPDK_CONFIG_VHOST 1 00:06:43.988 #define SPDK_CONFIG_VIRTIO 1 00:06:43.988 #undef SPDK_CONFIG_VTUNE 00:06:43.988 #define SPDK_CONFIG_VTUNE_DIR 00:06:43.988 #define SPDK_CONFIG_WERROR 1 00:06:43.988 #define SPDK_CONFIG_WPDK_DIR 00:06:43.988 #undef SPDK_CONFIG_XNVME 00:06:43.988 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:06:43.988 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:06:43.989 13:14:51 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:43.989 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # 
PYTHONDONTWRITEBYTECODE=1 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@204 -- # cat 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV= 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ 1 -eq 1 ]] 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV=1 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@277 -- # export valgrind= 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@277 -- # valgrind= 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@283 -- # uname -s 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # MAKE=make 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j112 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@307 -- # TEST_MODE= 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # [[ -z 3850959 ]] 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # kill -0 3850959 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@342 -- # local mount target_dir 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@344 -- # local -A mounts fss sizes 
avails uses 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.mxzQ2u 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:06:43.990 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.mxzQ2u/tests/vfio /tmp/spdk.mxzQ2u 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # df -T 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=607576064 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4676853760 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=52998811648 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=61730627584 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=8731815936 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:43.991 
13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=30860550144 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=30865313792 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4763648 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=12340133888 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=12346126336 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=5992448 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=30864261120 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=30865313792 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=1052672 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=6173048832 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=6173061120 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:06:43.991 * Looking for test storage... 
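The set_test_storage run above snapshots every mount from df -T into associative arrays; the lines just below then resolve the candidate test directory to its mount point and accept it once free space covers the request (requested_size=2214592512, i.e. 2 GiB plus margin). A condensed reconstruction of both halves, reusing the variable names the xtrace prints; this is a sketch of the traced logic, not the verbatim autotest_common.sh, and target_dir stands for the candidate directory (here the .../spdk/test/fuzz/llvm/vfio path):

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source        # e.g. spdk_root, tmpfs, /dev/pmem0
        fss["$mount"]=$fs               # e.g. overlay, tmpfs, ext2
        sizes["$mount"]=$size
        avails["$mount"]=$avail         # free space compared against the request below
        uses["$mount"]=$use
    done < <(df -T | grep -v Filesystem)

    requested_size=2214592512
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails[$mount]}
    if (( target_space >= requested_size )); then
        printf '* Found test storage at %s\n' "$target_dir"
    fi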
00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@379 -- # local target_space new_size 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # mount=/ 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # target_space=52998811648 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@392 -- # new_size=10946408448 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:06:43.991 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # return 0 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1683 -- # true 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:06:43.991 13:14:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:43.991 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:43.991 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.991 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.991 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.991 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.991 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.991 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.991 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.991 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.991 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.991 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.991 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.991 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:43.991 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:43.991 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.991 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.991 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:44.251 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:44.251 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.251 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:44.251 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.251 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:44.251 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:44.251 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.251 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:44.251 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.251 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.251 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.251 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:44.251 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.251 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:44.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.251 --rc genhtml_branch_coverage=1 00:06:44.251 --rc genhtml_function_coverage=1 00:06:44.251 --rc genhtml_legend=1 00:06:44.251 --rc geninfo_all_blocks=1 00:06:44.251 --rc geninfo_unexecuted_blocks=1 00:06:44.251 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:44.251 ' 00:06:44.251 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:44.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.251 --rc genhtml_branch_coverage=1 00:06:44.251 --rc genhtml_function_coverage=1 00:06:44.251 --rc genhtml_legend=1 00:06:44.251 --rc geninfo_all_blocks=1 00:06:44.251 --rc geninfo_unexecuted_blocks=1 00:06:44.251 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:44.251 ' 00:06:44.251 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:44.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.251 --rc genhtml_branch_coverage=1 00:06:44.251 --rc genhtml_function_coverage=1 00:06:44.251 --rc genhtml_legend=1 00:06:44.251 --rc geninfo_all_blocks=1 00:06:44.251 --rc geninfo_unexecuted_blocks=1 00:06:44.251 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:44.251 ' 00:06:44.251 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:44.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.251 --rc genhtml_branch_coverage=1 00:06:44.251 --rc genhtml_function_coverage=1 00:06:44.251 --rc genhtml_legend=1 00:06:44.251 --rc geninfo_all_blocks=1 00:06:44.251 --rc geninfo_unexecuted_blocks=1 00:06:44.251 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:44.251 ' 00:06:44.251 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:06:44.251 13:14:52 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:06:44.251 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:06:44.252 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:06:44.252 13:14:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:06:44.252 [2024-10-17 13:14:52.102659] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:06:44.252 [2024-10-17 13:14:52.102745] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3851015 ] 00:06:44.252 [2024-10-17 13:14:52.174538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.252 [2024-10-17 13:14:52.215975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.511 INFO: Running with entropic power schedule (0xFF, 100). 00:06:44.511 INFO: Seed: 2082682665 00:06:44.511 INFO: Loaded 1 modules (381868 inline 8-bit counters): 381868 [0x2bb2f4c, 0x2c102f8), 00:06:44.511 INFO: Loaded 1 PC tables (381868 PCs): 381868 [0x2c102f8,0x31e3db8), 00:06:44.511 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:06:44.511 INFO: A corpus is not provided, starting from an empty corpus 00:06:44.511 #2 INITED exec/s: 0 rss: 66Mb 00:06:44.511 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:44.511 This may also happen if the target rejected all inputs we tried so far 00:06:44.511 [2024-10-17 13:14:52.457304] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:06:45.030 NEW_FUNC[1/653]: 0x43b5e8 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:06:45.030 NEW_FUNC[2/653]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:06:45.030 #29 NEW cov: 10644 ft: 11061 corp: 2/7b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:45.289 NEW_FUNC[1/18]: 0x188e528 in nvme_pcie_qpair_complete_tracker /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_pcie_common.c:673 00:06:45.289 NEW_FUNC[2/18]: 0x189a168 in nvme_pcie_qpair_ring_cq_doorbell /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_pcie_internal.h:279 00:06:45.289 #35 NEW cov: 11151 ft: 14771 corp: 3/13b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 CrossOver- 00:06:45.289 NEW_FUNC[1/1]: 0x1bcbbd8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:45.289 #36 NEW cov: 11168 ft: 16623 corp: 4/19b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 1 ChangeByte- 00:06:45.548 #37 NEW cov: 11168 ft: 16866 corp: 5/25b lim: 6 exec/s: 37 rss: 75Mb L: 6/6 MS: 1 CopyPart- 00:06:45.807 #47 NEW cov: 11168 ft: 17064 corp: 6/31b lim: 6 exec/s: 47 rss: 75Mb L: 6/6 MS: 5 EraseBytes-ChangeBit-ShuffleBytes-CrossOver-CrossOver- 00:06:45.807 #48 NEW cov: 11168 ft: 17691 corp: 7/37b lim: 6 exec/s: 48 rss: 75Mb L: 6/6 MS: 1 CopyPart- 00:06:46.066 #49 NEW cov: 11168 ft: 17939 corp: 8/43b lim: 6 exec/s: 49 rss: 75Mb L: 6/6 MS: 1 ShuffleBytes- 00:06:46.325 #50 NEW cov: 11168 ft: 18366 corp: 9/49b lim: 6 exec/s: 50 rss: 76Mb L: 6/6 MS: 1 CopyPart- 00:06:46.585 #56 NEW cov: 11175 ft: 18419 corp: 10/55b lim: 6 exec/s: 56 rss: 76Mb L: 6/6 MS: 1 ChangeByte- 
00:06:46.585 #61 NEW cov: 11182 ft: 18672 corp: 11/61b lim: 6 exec/s: 30 rss: 76Mb L: 6/6 MS: 5 EraseBytes-CrossOver-EraseBytes-CrossOver-CrossOver- 00:06:46.585 #61 DONE cov: 11182 ft: 18672 corp: 11/61b lim: 6 exec/s: 30 rss: 76Mb 00:06:46.585 Done 61 runs in 2 second(s) 00:06:46.585 [2024-10-17 13:14:54.602342] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:06:46.845 13:14:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:06:46.845 13:14:54 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:46.845 13:14:54 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:46.845 13:14:54 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:06:46.845 13:14:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:06:46.845 13:14:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:06:46.845 13:14:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:06:46.845 13:14:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:06:46.845 13:14:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:06:46.845 13:14:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:06:46.845 13:14:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:06:46.845 13:14:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:06:46.845 13:14:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:06:46.845 13:14:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:06:46.845 13:14:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:06:46.845 13:14:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:06:46.845 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:06:46.845 13:14:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:46.845 13:14:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:06:46.845 13:14:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:06:46.845 [2024-10-17 13:14:54.868514] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
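Fuzzer instance 1 is driven entirely by the single llvm_vfio_fuzz command traced just above, reflowed here one flag per line for readability. The glosses are inferred from the start_llvm_fuzz locals in the trace rather than from the tool's own help, so read them as assumptions; rerunning it by hand would also need the hugepage and vfio-user environment the surrounding script prepares.

    # Inferred flag glosses (assumptions, not documented CLI help):
    #   -m 0x1  core mask   (local core=0x1)
    #   -s 0    memory size (mem_size=0)
    #   -t 1    run time    (local timen=1)
    #   -Z 1    fuzzer_type, presumably selecting one of the 7 '.fn =' entries counted by run.sh@68
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz \
        -m 0x1 -s 0 \
        -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ \
        -F /tmp/vfio-user-1/domain/1 \
        -c /tmp/vfio-user-1/fuzz_vfio_json.conf \
        -t 1 \
        -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 \
        -Y /tmp/vfio-user-1/domain/2 \
        -r /tmp/vfio-user-1/spdk1.sock \
        -Z 1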
00:06:46.845 [2024-10-17 13:14:54.868610] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3851553 ] 00:06:47.104 [2024-10-17 13:14:54.940994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.104 [2024-10-17 13:14:54.980578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.364 INFO: Running with entropic power schedule (0xFF, 100). 00:06:47.364 INFO: Seed: 559716774 00:06:47.364 INFO: Loaded 1 modules (381868 inline 8-bit counters): 381868 [0x2bb2f4c, 0x2c102f8), 00:06:47.364 INFO: Loaded 1 PC tables (381868 PCs): 381868 [0x2c102f8,0x31e3db8), 00:06:47.364 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:06:47.364 INFO: A corpus is not provided, starting from an empty corpus 00:06:47.364 #2 INITED exec/s: 0 rss: 67Mb 00:06:47.364 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:47.364 This may also happen if the target rejected all inputs we tried so far 00:06:47.364 [2024-10-17 13:14:55.221574] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:06:47.364 [2024-10-17 13:14:55.271223] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:06:47.364 [2024-10-17 13:14:55.271250] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:06:47.364 [2024-10-17 13:14:55.271268] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:06:47.623 NEW_FUNC[1/673]: 0x43bb88 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:06:47.623 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:06:47.623 #6 NEW cov: 11131 ft: 11096 corp: 2/5b lim: 4 exec/s: 0 rss: 72Mb L: 4/4 MS: 4 ChangeByte-ChangeBit-InsertByte-CMP- DE: "\002\000"- 00:06:47.882 [2024-10-17 13:14:55.729204] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:06:47.882 [2024-10-17 13:14:55.729241] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:06:47.882 [2024-10-17 13:14:55.729260] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:06:47.882 #7 NEW cov: 11148 ft: 14552 corp: 3/9b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 CrossOver- 00:06:47.882 [2024-10-17 13:14:55.912019] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:06:47.882 [2024-10-17 13:14:55.912043] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:06:47.882 [2024-10-17 13:14:55.912060] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:06:48.141 NEW_FUNC[1/1]: 0x1bcbbd8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:48.141 #8 NEW cov: 11165 ft: 15246 corp: 4/13b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 CopyPart- 00:06:48.141 [2024-10-17 13:14:56.101332] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:06:48.141 [2024-10-17 13:14:56.101356] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid 
argument 00:06:48.141 [2024-10-17 13:14:56.101373] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:06:48.400 #18 NEW cov: 11165 ft: 16414 corp: 5/17b lim: 4 exec/s: 18 rss: 75Mb L: 4/4 MS: 5 PersAutoDict-EraseBytes-InsertByte-ChangeBit-CrossOver- DE: "\002\000"- 00:06:48.400 [2024-10-17 13:14:56.293567] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:06:48.400 [2024-10-17 13:14:56.293590] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:06:48.400 [2024-10-17 13:14:56.293607] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:06:48.400 #19 NEW cov: 11165 ft: 17501 corp: 6/21b lim: 4 exec/s: 19 rss: 75Mb L: 4/4 MS: 1 ChangeByte- 00:06:48.708 [2024-10-17 13:14:56.477045] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:06:48.708 [2024-10-17 13:14:56.477068] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:06:48.708 [2024-10-17 13:14:56.477085] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:06:48.708 #20 NEW cov: 11165 ft: 17587 corp: 7/25b lim: 4 exec/s: 20 rss: 75Mb L: 4/4 MS: 1 ShuffleBytes- 00:06:48.708 [2024-10-17 13:14:56.654411] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:06:48.708 [2024-10-17 13:14:56.654433] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:06:48.708 [2024-10-17 13:14:56.654449] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:06:49.057 #26 NEW cov: 11165 ft: 17753 corp: 8/29b lim: 4 exec/s: 26 rss: 75Mb L: 4/4 MS: 1 ChangeBinInt- 00:06:49.057 [2024-10-17 13:14:56.834811] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:06:49.057 [2024-10-17 13:14:56.834833] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:06:49.057 [2024-10-17 13:14:56.834850] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:06:49.057 #27 NEW cov: 11165 ft: 17905 corp: 9/33b lim: 4 exec/s: 27 rss: 75Mb L: 4/4 MS: 1 ShuffleBytes- 00:06:49.057 [2024-10-17 13:14:57.023540] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:06:49.057 [2024-10-17 13:14:57.023563] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:06:49.057 [2024-10-17 13:14:57.023581] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:06:49.367 #38 NEW cov: 11172 ft: 18082 corp: 10/37b lim: 4 exec/s: 38 rss: 75Mb L: 4/4 MS: 1 ChangeBit- 00:06:49.367 [2024-10-17 13:14:57.204060] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:06:49.367 [2024-10-17 13:14:57.204083] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:06:49.367 [2024-10-17 13:14:57.204121] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:06:49.367 #41 NEW cov: 11172 ft: 18092 corp: 11/41b lim: 4 exec/s: 20 rss: 75Mb L: 4/4 MS: 3 CrossOver-ChangeBinInt-CopyPart- 00:06:49.367 #41 DONE cov: 11172 ft: 18092 corp: 11/41b lim: 4 exec/s: 20 rss: 75Mb 00:06:49.367 ###### Recommended dictionary. ###### 00:06:49.367 "\002\000" # Uses: 1 00:06:49.367 ###### End of recommended dictionary. 
###### 00:06:49.367 Done 41 runs in 2 second(s) 00:06:49.367 [2024-10-17 13:14:57.327231] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:06:49.625 13:14:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:06:49.625 13:14:57 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:49.625 13:14:57 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:49.625 13:14:57 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:06:49.625 13:14:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:06:49.625 13:14:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:06:49.625 13:14:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:06:49.625 13:14:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:06:49.625 13:14:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:06:49.625 13:14:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:06:49.625 13:14:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:06:49.625 13:14:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:06:49.625 13:14:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:06:49.625 13:14:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:06:49.625 13:14:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:06:49.625 13:14:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:06:49.625 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:06:49.625 13:14:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:49.625 13:14:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:06:49.625 13:14:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:06:49.625 [2024-10-17 13:14:57.596321] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
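Before instance 2 starts, run.sh@39 rewrites the shared fuzz_vfio_json.conf template so this instance binds its own vfio-user directories under /tmp/vfio-user-2. An equivalent sketch of that step; the redirection into the per-instance conf is not visible in the xtrace, so the destination file is an assumption taken from the -c path the invocation above passes:

    mkdir -p /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2
    # same substitution as the traced single-expression sed, split into two -e scripts
    sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%' \
        -e 's%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' \
        /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf \
        > /tmp/vfio-user-2/fuzz_vfio_json.conf    # assumed destination, matching the -c flag above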
00:06:49.626 [2024-10-17 13:14:57.596412] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852099 ] 00:06:49.626 [2024-10-17 13:14:57.670374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.885 [2024-10-17 13:14:57.710600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.885 INFO: Running with entropic power schedule (0xFF, 100). 00:06:49.885 INFO: Seed: 3284715208 00:06:49.885 INFO: Loaded 1 modules (381868 inline 8-bit counters): 381868 [0x2bb2f4c, 0x2c102f8), 00:06:49.885 INFO: Loaded 1 PC tables (381868 PCs): 381868 [0x2c102f8,0x31e3db8), 00:06:49.885 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:06:49.885 INFO: A corpus is not provided, starting from an empty corpus 00:06:49.885 #2 INITED exec/s: 0 rss: 67Mb 00:06:49.885 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:49.885 This may also happen if the target rejected all inputs we tried so far 00:06:50.144 [2024-10-17 13:14:57.947803] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:06:50.144 [2024-10-17 13:14:57.971196] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:06:50.144 [2024-10-17 13:14:57.971233] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:06:50.403 NEW_FUNC[1/673]: 0x43c578 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:06:50.403 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:06:50.403 #28 NEW cov: 11126 ft: 10984 corp: 2/9b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:06:50.403 [2024-10-17 13:14:58.409204] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:06:50.662 #31 NEW cov: 11141 ft: 14580 corp: 3/17b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 3 InsertRepeatedBytes-InsertByte-InsertByte- 00:06:50.662 [2024-10-17 13:14:58.592212] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:06:50.662 [2024-10-17 13:14:58.592249] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:06:50.662 #37 NEW cov: 11141 ft: 15241 corp: 4/25b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 ShuffleBytes- 00:06:50.921 [2024-10-17 13:14:58.765683] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:06:50.921 [2024-10-17 13:14:58.765713] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:06:50.921 NEW_FUNC[1/1]: 0x1bcbbd8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:50.921 #38 NEW cov: 11158 ft: 15373 corp: 5/33b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 ChangeBinInt- 00:06:50.921 [2024-10-17 13:14:58.944844] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:06:51.181 #39 NEW cov: 11158 ft: 16240 corp: 6/41b lim: 8 exec/s: 39 rss: 77Mb L: 8/8 MS: 1 ChangeBinInt- 00:06:51.181 [2024-10-17 13:14:59.116520] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:06:51.181 #43 NEW cov: 11158 ft: 
16776 corp: 7/49b lim: 8 exec/s: 43 rss: 77Mb L: 8/8 MS: 4 EraseBytes-ChangeBinInt-ChangeBit-InsertByte- 00:06:51.440 [2024-10-17 13:14:59.289506] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:06:51.440 [2024-10-17 13:14:59.289536] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:06:51.440 #44 NEW cov: 11158 ft: 16856 corp: 8/57b lim: 8 exec/s: 44 rss: 77Mb L: 8/8 MS: 1 CopyPart- 00:06:51.440 [2024-10-17 13:14:59.461636] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:06:51.440 [2024-10-17 13:14:59.461666] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:06:51.699 #45 NEW cov: 11158 ft: 17393 corp: 9/65b lim: 8 exec/s: 45 rss: 77Mb L: 8/8 MS: 1 CrossOver- 00:06:51.699 [2024-10-17 13:14:59.634851] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:06:51.699 [2024-10-17 13:14:59.634880] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:06:51.699 #46 NEW cov: 11165 ft: 17435 corp: 10/73b lim: 8 exec/s: 46 rss: 77Mb L: 8/8 MS: 1 CrossOver- 00:06:51.958 [2024-10-17 13:14:59.811023] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:06:51.958 #52 NEW cov: 11165 ft: 17460 corp: 11/81b lim: 8 exec/s: 26 rss: 77Mb L: 8/8 MS: 1 ChangeBinInt- 00:06:51.958 #52 DONE cov: 11165 ft: 17460 corp: 11/81b lim: 8 exec/s: 26 rss: 77Mb 00:06:51.958 Done 52 runs in 2 second(s) 00:06:51.958 [2024-10-17 13:14:59.936340] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:06:52.217 13:15:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:06:52.217 13:15:00 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:52.217 13:15:00 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:52.217 13:15:00 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:06:52.217 13:15:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:06:52.217 13:15:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:06:52.217 13:15:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:06:52.217 13:15:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:06:52.217 13:15:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:06:52.217 13:15:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:06:52.217 13:15:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:06:52.217 13:15:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:06:52.217 13:15:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:06:52.217 13:15:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:06:52.217 13:15:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:06:52.217 13:15:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 
's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:06:52.217 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:06:52.217 13:15:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:52.217 13:15:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:06:52.217 13:15:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:06:52.217 [2024-10-17 13:15:00.206918] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:06:52.217 [2024-10-17 13:15:00.206992] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852458 ] 00:06:52.476 [2024-10-17 13:15:00.282194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.476 [2024-10-17 13:15:00.323932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.476 INFO: Running with entropic power schedule (0xFF, 100). 00:06:52.476 INFO: Seed: 1609782526 00:06:52.736 INFO: Loaded 1 modules (381868 inline 8-bit counters): 381868 [0x2bb2f4c, 0x2c102f8), 00:06:52.736 INFO: Loaded 1 PC tables (381868 PCs): 381868 [0x2c102f8,0x31e3db8), 00:06:52.736 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:06:52.736 INFO: A corpus is not provided, starting from an empty corpus 00:06:52.736 #2 INITED exec/s: 0 rss: 67Mb 00:06:52.736 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:52.736 This may also happen if the target rejected all inputs we tried so far 00:06:52.736 [2024-10-17 13:15:00.574699] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:06:52.736 [2024-10-17 13:15:00.627267] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [(nil), (nil)) fd=300 offset=0xa00000000000000 prot=0x3: Invalid argument 00:06:52.736 [2024-10-17 13:15:00.627294] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0, 0) offset=0xa00000000000000 flags=0x3: Invalid argument 00:06:52.736 [2024-10-17 13:15:00.627305] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:06:52.736 [2024-10-17 13:15:00.627323] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:06:52.996 NEW_FUNC[1/673]: 0x43cc68 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:06:52.996 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:06:52.996 #93 NEW cov: 11132 ft: 11098 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:06:53.255 [2024-10-17 13:15:01.093258] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0xf7000000000000, 0xf7000000000000) fd=302 offset=0xa00000000000000 prot=0x3: Invalid argument 00:06:53.255 [2024-10-17 13:15:01.093295] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xf7000000000000, 0xf7000000000000) offset=0xa00000000000000 flags=0x3: Invalid argument 00:06:53.255 [2024-10-17 13:15:01.093307] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:06:53.255 [2024-10-17 13:15:01.093324] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:06:53.255 #94 NEW cov: 11146 ft: 14405 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ChangeBinInt- 00:06:53.255 [2024-10-17 13:15:01.263900] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0xf7000000000000, 0xf7000000000000) fd=302 offset=0xa0000002f000000 prot=0x3: Invalid argument 00:06:53.255 [2024-10-17 13:15:01.263924] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xf7000000000000, 0xf7000000000000) offset=0xa0000002f000000 flags=0x3: Invalid argument 00:06:53.255 [2024-10-17 13:15:01.263935] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:06:53.255 [2024-10-17 13:15:01.263951] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:06:53.514 NEW_FUNC[1/1]: 0x1bcbbd8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:53.514 #95 NEW cov: 11163 ft: 15168 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeByte- 00:06:53.514 [2024-10-17 13:15:01.436542] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0x5d000000000000, 0x5d000000000000) fd=302 offset=0xa00000000000000 prot=0x3: Invalid argument 00:06:53.514 [2024-10-17 13:15:01.436564] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0x5d000000000000, 0x5d000000000000) offset=0xa00000000000000 flags=0x3: 
Invalid argument 00:06:53.514 [2024-10-17 13:15:01.436575] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:06:53.514 [2024-10-17 13:15:01.436605] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:06:53.514 #96 NEW cov: 11163 ft: 15265 corp: 5/129b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeByte- 00:06:53.773 [2024-10-17 13:15:01.611671] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0xf7000000000000, 0xf7000000010000) fd=302 offset=0xa0000002f000000 prot=0x3: Permission denied 00:06:53.773 [2024-10-17 13:15:01.611695] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xf7000000000000, 0xf7000000010000) offset=0xa0000002f000000 flags=0x3: Permission denied 00:06:53.773 [2024-10-17 13:15:01.611706] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Permission denied 00:06:53.773 [2024-10-17 13:15:01.611722] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:06:53.773 #97 NEW cov: 11163 ft: 16622 corp: 6/161b lim: 32 exec/s: 97 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:06:53.773 [2024-10-17 13:15:01.788617] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0xf7000000000000, 0xf7000000010000) fd=302 offset=0xf700000000000000 prot=0x3: Permission denied 00:06:53.773 [2024-10-17 13:15:01.788641] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xf7000000000000, 0xf7000000010000) offset=0xf700000000000000 flags=0x3: Permission denied 00:06:53.773 [2024-10-17 13:15:01.788652] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Permission denied 00:06:53.773 [2024-10-17 13:15:01.788672] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:06:54.031 #103 NEW cov: 11163 ft: 17031 corp: 7/193b lim: 32 exec/s: 103 rss: 74Mb L: 32/32 MS: 1 CopyPart- 00:06:54.031 [2024-10-17 13:15:01.966838] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0xf7000000000000, 0xf7000000010000) fd=302 offset=0xa0000002f000000 prot=0x3: Permission denied 00:06:54.031 [2024-10-17 13:15:01.966861] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xf7000000000000, 0xf7000000010000) offset=0xa0000002f000000 flags=0x3: Permission denied 00:06:54.031 [2024-10-17 13:15:01.966871] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Permission denied 00:06:54.031 [2024-10-17 13:15:01.966889] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:06:54.031 #109 NEW cov: 11163 ft: 17347 corp: 8/225b lim: 32 exec/s: 109 rss: 74Mb L: 32/32 MS: 1 CopyPart- 00:06:54.290 [2024-10-17 13:15:02.141540] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [(nil), (nil)) fd=302 offset=0xa00000000000000 prot=0x3: Invalid argument 00:06:54.290 [2024-10-17 13:15:02.141564] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0, 0) offset=0xa00000000000000 flags=0x3: Invalid argument 00:06:54.290 [2024-10-17 13:15:02.141574] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:06:54.290 [2024-10-17 13:15:02.141590] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 
00:06:54.290 #110 NEW cov: 11163 ft: 17477 corp: 9/257b lim: 32 exec/s: 110 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:06:54.290 [2024-10-17 13:15:02.312222] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [(nil), (nil)) fd=302 offset=0xa00000000000000 prot=0x3: Invalid argument 00:06:54.290 [2024-10-17 13:15:02.312245] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0, 0) offset=0xa00000000000000 flags=0x3: Invalid argument 00:06:54.290 [2024-10-17 13:15:02.312256] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:06:54.290 [2024-10-17 13:15:02.312272] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:06:54.550 #111 NEW cov: 11170 ft: 17842 corp: 10/289b lim: 32 exec/s: 111 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:06:54.550 [2024-10-17 13:15:02.485208] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0xf7000000000000, 0xf7000000000000) fd=302 offset=0xa00000000000000 prot=0x3: Invalid argument 00:06:54.550 [2024-10-17 13:15:02.485231] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xf7000000000000, 0xf7000000000000) offset=0xa00000000000000 flags=0x3: Invalid argument 00:06:54.550 [2024-10-17 13:15:02.485242] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:06:54.550 [2024-10-17 13:15:02.485257] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:06:54.550 #112 NEW cov: 11170 ft: 17877 corp: 11/321b lim: 32 exec/s: 56 rss: 75Mb L: 32/32 MS: 1 ShuffleBytes- 00:06:54.550 #112 DONE cov: 11170 ft: 17877 corp: 11/321b lim: 32 exec/s: 56 rss: 75Mb 00:06:54.550 Done 112 runs in 2 second(s) 00:06:54.809 [2024-10-17 13:15:02.604347] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:06:54.809 13:15:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:06:54.809 13:15:02 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:54.809 13:15:02 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:54.809 13:15:02 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:06:54.809 13:15:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:06:54.809 13:15:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:06:54.809 13:15:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:06:54.809 13:15:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:06:54.809 13:15:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:06:54.809 13:15:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:06:54.809 13:15:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:06:54.809 13:15:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:06:54.809 13:15:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:06:54.809 13:15:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 
00:06:54.809 13:15:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:06:54.809 13:15:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:06:54.809 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:06:54.809 13:15:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:54.809 13:15:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:06:54.809 13:15:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:06:55.069 [2024-10-17 13:15:02.871304] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 00:06:55.069 [2024-10-17 13:15:02.871377] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3853047 ] 00:06:55.069 [2024-10-17 13:15:02.942971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.069 [2024-10-17 13:15:02.983028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.328 INFO: Running with entropic power schedule (0xFF, 100). 00:06:55.328 INFO: Seed: 4259758616 00:06:55.328 INFO: Loaded 1 modules (381868 inline 8-bit counters): 381868 [0x2bb2f4c, 0x2c102f8), 00:06:55.328 INFO: Loaded 1 PC tables (381868 PCs): 381868 [0x2c102f8,0x31e3db8), 00:06:55.328 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:06:55.328 INFO: A corpus is not provided, starting from an empty corpus 00:06:55.328 #2 INITED exec/s: 0 rss: 67Mb 00:06:55.328 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:55.328 This may also happen if the target rejected all inputs we tried so far 00:06:55.328 [2024-10-17 13:15:03.216757] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:06:55.846 NEW_FUNC[1/667]: 0x43d4e8 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:06:55.846 NEW_FUNC[2/667]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:06:55.846 #35 NEW cov: 11071 ft: 10856 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 3 InsertRepeatedBytes-CopyPart-CopyPart- 00:06:55.846 NEW_FUNC[1/5]: 0x1591138 in spdk_nvme_opc_get_data_transfer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/nvme_spec.h:1782 00:06:55.846 NEW_FUNC[2/5]: 0x18a4d98 in nvme_payload_type /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:260 00:06:55.846 #36 NEW cov: 11138 ft: 14277 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ChangeByte- 00:06:56.105 NEW_FUNC[1/1]: 0x1bcbbd8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:56.105 #37 NEW cov: 11158 ft: 15339 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:06:56.364 #38 NEW cov: 11158 ft: 16140 corp: 5/129b lim: 32 exec/s: 38 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:06:56.364 #39 NEW cov: 11158 ft: 16701 corp: 6/161b lim: 32 exec/s: 39 rss: 74Mb L: 32/32 MS: 1 CMP- DE: "\326\265\035\005\000\000\000\000"- 00:06:56.623 #40 NEW cov: 11158 ft: 16854 corp: 7/193b lim: 32 exec/s: 40 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:06:56.882 #46 NEW cov: 11158 ft: 17157 corp: 8/225b lim: 32 exec/s: 46 rss: 75Mb L: 32/32 MS: 1 CopyPart- 00:06:57.141 #47 NEW cov: 11158 ft: 17203 corp: 9/257b lim: 32 exec/s: 47 rss: 75Mb L: 32/32 MS: 1 CopyPart- 00:06:57.141 #48 NEW cov: 11165 ft: 17223 corp: 10/289b lim: 32 exec/s: 48 rss: 75Mb L: 32/32 MS: 1 PersAutoDict- DE: "\326\265\035\005\000\000\000\000"- 00:06:57.401 #50 NEW cov: 11165 ft: 17246 corp: 11/321b lim: 32 exec/s: 25 rss: 75Mb L: 32/32 MS: 2 EraseBytes-InsertByte- 00:06:57.401 #50 DONE cov: 11165 ft: 17246 corp: 11/321b lim: 32 exec/s: 25 rss: 75Mb 00:06:57.401 ###### Recommended dictionary. ###### 00:06:57.401 "\326\265\035\005\000\000\000\000" # Uses: 1 00:06:57.401 ###### End of recommended dictionary. 
###### 00:06:57.401 Done 50 runs in 2 second(s) 00:06:57.401 [2024-10-17 13:15:05.318348] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:06:57.661 13:15:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:06:57.661 13:15:05 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:57.661 13:15:05 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:57.661 13:15:05 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:06:57.661 13:15:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:06:57.661 13:15:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:06:57.661 13:15:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:06:57.661 13:15:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:06:57.661 13:15:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:06:57.661 13:15:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:06:57.661 13:15:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:06:57.661 13:15:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:06:57.661 13:15:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:06:57.661 13:15:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:06:57.661 13:15:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:06:57.661 13:15:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:06:57.661 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:06:57.661 13:15:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:57.661 13:15:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:06:57.661 13:15:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:06:57.661 [2024-10-17 13:15:05.586089] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
00:06:57.661 [2024-10-17 13:15:05.586187] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3853986 ] 00:06:57.661 [2024-10-17 13:15:05.659243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.661 [2024-10-17 13:15:05.698934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.920 INFO: Running with entropic power schedule (0xFF, 100). 00:06:57.920 INFO: Seed: 2678858709 00:06:57.920 INFO: Loaded 1 modules (381868 inline 8-bit counters): 381868 [0x2bb2f4c, 0x2c102f8), 00:06:57.920 INFO: Loaded 1 PC tables (381868 PCs): 381868 [0x2c102f8,0x31e3db8), 00:06:57.920 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:06:57.920 INFO: A corpus is not provided, starting from an empty corpus 00:06:57.920 #2 INITED exec/s: 0 rss: 68Mb 00:06:57.920 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:57.920 This may also happen if the target rejected all inputs we tried so far 00:06:57.921 [2024-10-17 13:15:05.930465] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:06:58.180 [2024-10-17 13:15:05.978216] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:06:58.180 [2024-10-17 13:15:05.978253] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:06:58.439 NEW_FUNC[1/673]: 0x43dee8 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:06:58.439 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:06:58.439 #29 NEW cov: 11131 ft: 11078 corp: 2/14b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 2 InsertRepeatedBytes-InsertByte- 00:06:58.439 [2024-10-17 13:15:06.446968] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:06:58.439 [2024-10-17 13:15:06.447011] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:06:58.699 #30 NEW cov: 11146 ft: 13984 corp: 3/27b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 CMP- DE: "\020\000"- 00:06:58.699 [2024-10-17 13:15:06.632984] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:06:58.699 [2024-10-17 13:15:06.633017] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:06:58.699 NEW_FUNC[1/1]: 0x1bcbbd8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:58.699 #31 NEW cov: 11163 ft: 14655 corp: 4/40b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 ChangeBinInt- 00:06:58.958 [2024-10-17 13:15:06.820696] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:06:58.958 [2024-10-17 13:15:06.820726] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:06:58.958 #32 NEW cov: 11163 ft: 15931 corp: 5/53b lim: 13 exec/s: 32 rss: 75Mb L: 13/13 MS: 1 ChangeBinInt- 00:06:59.218 [2024-10-17 13:15:07.010938] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:06:59.218 [2024-10-17 13:15:07.010969] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 
00:06:59.218 #38 NEW cov: 11163 ft: 16100 corp: 6/66b lim: 13 exec/s: 38 rss: 75Mb L: 13/13 MS: 1 ChangeByte- 00:06:59.218 [2024-10-17 13:15:07.196681] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:06:59.218 [2024-10-17 13:15:07.196711] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:06:59.476 #44 NEW cov: 11163 ft: 16164 corp: 7/79b lim: 13 exec/s: 44 rss: 75Mb L: 13/13 MS: 1 CopyPart- 00:06:59.476 [2024-10-17 13:15:07.379727] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:06:59.476 [2024-10-17 13:15:07.379756] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:06:59.476 #45 NEW cov: 11163 ft: 16173 corp: 8/92b lim: 13 exec/s: 45 rss: 75Mb L: 13/13 MS: 1 ChangeBit- 00:06:59.736 [2024-10-17 13:15:07.562611] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:06:59.736 [2024-10-17 13:15:07.562640] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:06:59.736 #46 NEW cov: 11163 ft: 16230 corp: 9/105b lim: 13 exec/s: 46 rss: 75Mb L: 13/13 MS: 1 ChangeBit- 00:06:59.736 [2024-10-17 13:15:07.743809] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:06:59.736 [2024-10-17 13:15:07.743844] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:06:59.995 #47 NEW cov: 11170 ft: 16275 corp: 10/118b lim: 13 exec/s: 47 rss: 75Mb L: 13/13 MS: 1 CrossOver- 00:06:59.995 [2024-10-17 13:15:07.927074] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:06:59.995 [2024-10-17 13:15:07.927106] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:06:59.995 #48 NEW cov: 11170 ft: 16332 corp: 11/131b lim: 13 exec/s: 24 rss: 75Mb L: 13/13 MS: 1 CrossOver- 00:06:59.995 #48 DONE cov: 11170 ft: 16332 corp: 11/131b lim: 13 exec/s: 24 rss: 75Mb 00:06:59.995 ###### Recommended dictionary. ###### 00:06:59.995 "\020\000" # Uses: 1 00:06:59.995 ###### End of recommended dictionary. 
###### 00:06:59.995 Done 48 runs in 2 second(s) 00:07:00.255 [2024-10-17 13:15:08.058343] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:07:00.255 13:15:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:07:00.255 13:15:08 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:00.255 13:15:08 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:00.255 13:15:08 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:00.255 13:15:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:07:00.255 13:15:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:00.255 13:15:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:00.255 13:15:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:00.255 13:15:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:07:00.255 13:15:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:07:00.255 13:15:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:07:00.255 13:15:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:07:00.255 13:15:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:00.255 13:15:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:00.255 13:15:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:00.255 13:15:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:07:00.255 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:00.255 13:15:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:00.255 13:15:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:00.255 13:15:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:07:00.515 [2024-10-17 13:15:08.320898] Starting SPDK v25.01-pre git sha1 cca20a51a / DPDK 24.03.0 initialization... 
00:07:00.515 [2024-10-17 13:15:08.320972] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854550 ] 00:07:00.515 [2024-10-17 13:15:08.392311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.515 [2024-10-17 13:15:08.431911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.775 INFO: Running with entropic power schedule (0xFF, 100). 00:07:00.775 INFO: Seed: 1120814346 00:07:00.775 INFO: Loaded 1 modules (381868 inline 8-bit counters): 381868 [0x2bb2f4c, 0x2c102f8), 00:07:00.775 INFO: Loaded 1 PC tables (381868 PCs): 381868 [0x2c102f8,0x31e3db8), 00:07:00.775 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:00.775 INFO: A corpus is not provided, starting from an empty corpus 00:07:00.775 #2 INITED exec/s: 0 rss: 67Mb 00:07:00.775 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:00.775 This may also happen if the target rejected all inputs we tried so far 00:07:00.775 [2024-10-17 13:15:08.668358] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:07:00.775 [2024-10-17 13:15:08.692208] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:00.775 [2024-10-17 13:15:08.692244] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:01.034 NEW_FUNC[1/673]: 0x43ebd8 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:07:01.034 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:01.034 #18 NEW cov: 11128 ft: 11059 corp: 2/10b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:07:01.293 [2024-10-17 13:15:09.101159] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:01.293 [2024-10-17 13:15:09.101200] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:01.293 #24 NEW cov: 11142 ft: 14749 corp: 3/19b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ChangeBit- 00:07:01.293 [2024-10-17 13:15:09.222957] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:01.293 [2024-10-17 13:15:09.222993] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:01.293 #35 NEW cov: 11142 ft: 14896 corp: 4/28b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:01.561 [2024-10-17 13:15:09.346031] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:01.561 [2024-10-17 13:15:09.346067] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:01.561 #36 NEW cov: 11142 ft: 15462 corp: 5/37b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 1 ShuffleBytes- 00:07:01.561 [2024-10-17 13:15:09.458069] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:01.561 [2024-10-17 13:15:09.458104] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:01.561 NEW_FUNC[1/1]: 0x1bcbbd8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:01.561 #37 NEW cov: 11159 ft: 
16435 corp: 6/46b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 1 ChangeBit- 00:07:01.561 [2024-10-17 13:15:09.579122] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:01.561 [2024-10-17 13:15:09.579158] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:01.830 #38 NEW cov: 11159 ft: 16743 corp: 7/55b lim: 9 exec/s: 38 rss: 75Mb L: 9/9 MS: 1 ChangeByte- 00:07:01.830 [2024-10-17 13:15:09.702243] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:01.830 [2024-10-17 13:15:09.702279] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:01.830 #44 NEW cov: 11159 ft: 16756 corp: 8/64b lim: 9 exec/s: 44 rss: 75Mb L: 9/9 MS: 1 CrossOver- 00:07:01.830 [2024-10-17 13:15:09.824251] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:01.830 [2024-10-17 13:15:09.824286] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:02.090 #45 NEW cov: 11159 ft: 17082 corp: 9/73b lim: 9 exec/s: 45 rss: 75Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:02.090 [2024-10-17 13:15:09.945277] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:02.090 [2024-10-17 13:15:09.945310] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:02.090 #46 NEW cov: 11159 ft: 17194 corp: 10/82b lim: 9 exec/s: 46 rss: 76Mb L: 9/9 MS: 1 ShuffleBytes- 00:07:02.090 [2024-10-17 13:15:10.068346] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:02.090 [2024-10-17 13:15:10.068391] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:02.349 #49 NEW cov: 11159 ft: 17221 corp: 11/91b lim: 9 exec/s: 49 rss: 76Mb L: 9/9 MS: 3 EraseBytes-ChangeByte-CrossOver- 00:07:02.349 [2024-10-17 13:15:10.192434] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:02.349 [2024-10-17 13:15:10.192473] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:02.349 #50 NEW cov: 11159 ft: 17251 corp: 12/100b lim: 9 exec/s: 50 rss: 76Mb L: 9/9 MS: 1 ShuffleBytes- 00:07:02.349 [2024-10-17 13:15:10.305604] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:02.350 [2024-10-17 13:15:10.305639] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:02.350 #51 NEW cov: 11159 ft: 17396 corp: 13/109b lim: 9 exec/s: 51 rss: 76Mb L: 9/9 MS: 1 ShuffleBytes- 00:07:02.609 [2024-10-17 13:15:10.428702] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:02.609 [2024-10-17 13:15:10.428751] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:02.609 #57 NEW cov: 11166 ft: 17556 corp: 14/118b lim: 9 exec/s: 57 rss: 76Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:02.609 [2024-10-17 13:15:10.551880] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:02.609 [2024-10-17 13:15:10.551913] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:02.609 #63 NEW cov: 11166 ft: 17560 corp: 15/127b lim: 9 exec/s: 63 rss: 76Mb L: 9/9 MS: 1 ChangeBit- 00:07:02.868 [2024-10-17 13:15:10.674048] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:02.868 [2024-10-17 
13:15:10.674082] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:02.868 #69 NEW cov: 11166 ft: 17771 corp: 16/136b lim: 9 exec/s: 34 rss: 76Mb L: 9/9 MS: 1 ChangeByte- 00:07:02.868 #69 DONE cov: 11166 ft: 17771 corp: 16/136b lim: 9 exec/s: 34 rss: 76Mb 00:07:02.868 Done 69 runs in 2 second(s) 00:07:02.868 [2024-10-17 13:15:10.766351] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:07:03.127 13:15:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:07:03.127 13:15:10 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:03.127 13:15:10 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:03.127 13:15:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:07:03.127 00:07:03.127 real 0m19.400s 00:07:03.127 user 0m27.318s 00:07:03.127 sys 0m1.808s 00:07:03.127 13:15:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.127 13:15:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:03.127 ************************************ 00:07:03.127 END TEST vfio_llvm_fuzz 00:07:03.127 ************************************ 00:07:03.127 00:07:03.127 real 1m22.787s 00:07:03.127 user 2m7.200s 00:07:03.127 sys 0m9.095s 00:07:03.127 13:15:11 llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.127 13:15:11 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:03.127 ************************************ 00:07:03.127 END TEST llvm_fuzz 00:07:03.127 ************************************ 00:07:03.127 13:15:11 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:07:03.127 13:15:11 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:07:03.127 13:15:11 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:07:03.127 13:15:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:03.127 13:15:11 -- common/autotest_common.sh@10 -- # set +x 00:07:03.127 13:15:11 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:07:03.127 13:15:11 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:07:03.127 13:15:11 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:07:03.127 13:15:11 -- common/autotest_common.sh@10 -- # set +x 00:07:09.702 INFO: APP EXITING 00:07:09.702 INFO: killing all VMs 00:07:09.702 INFO: killing vhost app 00:07:09.702 INFO: EXIT DONE 00:07:12.238 Waiting for block devices as requested 00:07:12.238 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:12.238 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:12.238 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:12.238 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:12.238 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:12.238 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:12.238 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:12.238 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:12.496 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:12.496 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:12.496 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:12.755 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:12.755 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:12.755 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:13.014 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:13.014 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:13.014 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:07:16.305 Cleaning 00:07:16.305 Removing: 
/dev/shm/spdk_tgt_trace.pid3826594 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3824125 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3825273 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3826594 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3827049 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3828132 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3828169 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3829267 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3829276 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3829707 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3830035 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3830358 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3830694 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3830804 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3831060 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3831346 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3831662 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3832514 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3835564 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3835715 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3836016 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3836053 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3836588 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3836729 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3837156 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3837195 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3837555 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3837713 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3837822 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3837997 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3838402 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3838685 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3838967 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3839140 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3839806 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3840279 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3840625 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3841154 00:07:16.305 Removing: /var/run/dpdk/spdk_pid3841539 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3841977 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3842515 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3842812 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3843330 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3843862 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3844151 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3844689 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3845127 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3845508 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3846037 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3846412 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3846863 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3847392 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3847692 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3848212 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3848685 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3849038 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3849572 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3849987 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3850383 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3851015 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3851553 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3852099 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3852458 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3853047 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3853986 00:07:16.564 Removing: /var/run/dpdk/spdk_pid3854550 00:07:16.564 Clean 00:07:16.564 13:15:24 -- common/autotest_common.sh@1451 
-- # return 0 00:07:16.564 13:15:24 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:07:16.564 13:15:24 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:16.564 13:15:24 -- common/autotest_common.sh@10 -- # set +x 00:07:16.824 13:15:24 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:07:16.824 13:15:24 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:16.824 13:15:24 -- common/autotest_common.sh@10 -- # set +x 00:07:16.824 13:15:24 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:07:16.824 13:15:24 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:07:16.824 13:15:24 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:07:16.824 13:15:24 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:07:16.824 13:15:24 -- spdk/autotest.sh@394 -- # hostname 00:07:16.824 13:15:24 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-20 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info 00:07:17.083 geninfo: WARNING: invalid characters removed from testname! 00:07:23.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcda 00:07:24.598 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcda 00:07:29.878 13:15:36 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:07:38.001 13:15:44 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:07:42.189 13:15:49 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:07:47.524 13:15:54 -- spdk/autotest.sh@401 
-- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:07:52.802 13:16:00 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:07:58.067 13:16:05 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:03.338 13:16:10 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:08:03.338 13:16:10 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:08:03.338 13:16:10 -- common/autotest_common.sh@1691 -- $ lcov --version 00:08:03.338 13:16:10 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:08:03.338 13:16:10 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:08:03.338 13:16:10 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:08:03.338 13:16:10 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:08:03.338 13:16:10 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:08:03.338 13:16:10 -- scripts/common.sh@336 -- $ IFS=.-: 00:08:03.338 13:16:10 -- scripts/common.sh@336 -- $ read -ra ver1 00:08:03.338 13:16:10 -- scripts/common.sh@337 -- $ IFS=.-: 00:08:03.338 13:16:10 -- scripts/common.sh@337 -- $ read -ra ver2 00:08:03.338 13:16:10 -- scripts/common.sh@338 -- $ local 'op=<' 00:08:03.338 13:16:10 -- scripts/common.sh@340 -- $ ver1_l=2 00:08:03.338 13:16:10 -- scripts/common.sh@341 -- $ ver2_l=1 00:08:03.338 13:16:10 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:08:03.338 13:16:10 -- scripts/common.sh@344 -- $ case "$op" in 00:08:03.338 13:16:10 -- scripts/common.sh@345 -- $ : 1 00:08:03.338 13:16:10 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:08:03.338 13:16:10 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:03.338 13:16:10 -- scripts/common.sh@365 -- $ decimal 1 00:08:03.338 13:16:10 -- scripts/common.sh@353 -- $ local d=1 00:08:03.338 13:16:10 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:08:03.338 13:16:10 -- scripts/common.sh@355 -- $ echo 1 00:08:03.338 13:16:10 -- scripts/common.sh@365 -- $ ver1[v]=1 00:08:03.338 13:16:10 -- scripts/common.sh@366 -- $ decimal 2 00:08:03.338 13:16:10 -- scripts/common.sh@353 -- $ local d=2 00:08:03.338 13:16:10 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:08:03.338 13:16:10 -- scripts/common.sh@355 -- $ echo 2 00:08:03.338 13:16:10 -- scripts/common.sh@366 -- $ ver2[v]=2 00:08:03.338 13:16:10 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:08:03.338 13:16:10 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:08:03.338 13:16:10 -- scripts/common.sh@368 -- $ return 0 00:08:03.338 13:16:10 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:03.338 13:16:10 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:08:03.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.338 --rc genhtml_branch_coverage=1 00:08:03.338 --rc genhtml_function_coverage=1 00:08:03.338 --rc genhtml_legend=1 00:08:03.338 --rc geninfo_all_blocks=1 00:08:03.338 --rc geninfo_unexecuted_blocks=1 00:08:03.338 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:03.338 ' 00:08:03.338 13:16:10 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:08:03.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.338 --rc genhtml_branch_coverage=1 00:08:03.338 --rc genhtml_function_coverage=1 00:08:03.338 --rc genhtml_legend=1 00:08:03.338 --rc geninfo_all_blocks=1 00:08:03.338 --rc geninfo_unexecuted_blocks=1 00:08:03.338 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:03.338 ' 00:08:03.338 13:16:10 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:08:03.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.338 --rc genhtml_branch_coverage=1 00:08:03.338 --rc genhtml_function_coverage=1 00:08:03.338 --rc genhtml_legend=1 00:08:03.338 --rc geninfo_all_blocks=1 00:08:03.338 --rc geninfo_unexecuted_blocks=1 00:08:03.338 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:03.338 ' 00:08:03.338 13:16:10 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:08:03.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.338 --rc genhtml_branch_coverage=1 00:08:03.338 --rc genhtml_function_coverage=1 00:08:03.338 --rc genhtml_legend=1 00:08:03.338 --rc geninfo_all_blocks=1 00:08:03.338 --rc geninfo_unexecuted_blocks=1 00:08:03.338 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:03.338 ' 00:08:03.338 13:16:10 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:03.338 13:16:10 -- scripts/common.sh@15 -- $ shopt -s extglob 00:08:03.338 13:16:10 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:08:03.338 13:16:10 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.338 13:16:10 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.338 13:16:10 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.338 13:16:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.338 13:16:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.338 13:16:10 -- paths/export.sh@5 -- $ export PATH 00:08:03.338 13:16:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.338 13:16:10 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:08:03.338 13:16:10 -- common/autobuild_common.sh@486 -- $ date +%s 00:08:03.338 13:16:10 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729163770.XXXXXX 00:08:03.338 13:16:10 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729163770.R4jq9P 00:08:03.338 13:16:10 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:08:03.338 13:16:10 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:08:03.338 13:16:10 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:08:03.338 13:16:10 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:08:03.338 13:16:10 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:08:03.338 13:16:10 -- common/autobuild_common.sh@502 -- $ get_config_params 00:08:03.338 13:16:10 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:08:03.338 13:16:10 -- common/autotest_common.sh@10 -- $ set +x 00:08:03.338 13:16:10 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:08:03.338 13:16:10 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:08:03.338 13:16:10 -- pm/common@17 -- $ local monitor 00:08:03.338 13:16:10 -- pm/common@19 -- $ 
for monitor in "${MONITOR_RESOURCES[@]}" 00:08:03.338 13:16:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:03.338 13:16:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:03.338 13:16:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:03.338 13:16:10 -- pm/common@25 -- $ sleep 1 00:08:03.338 13:16:10 -- pm/common@21 -- $ date +%s 00:08:03.338 13:16:10 -- pm/common@21 -- $ date +%s 00:08:03.338 13:16:10 -- pm/common@21 -- $ date +%s 00:08:03.338 13:16:10 -- pm/common@21 -- $ date +%s 00:08:03.338 13:16:10 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1729163770 00:08:03.339 13:16:10 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1729163770 00:08:03.339 13:16:10 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1729163770 00:08:03.339 13:16:10 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1729163770 00:08:03.339 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1729163770_collect-vmstat.pm.log 00:08:03.339 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1729163770_collect-cpu-load.pm.log 00:08:03.339 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1729163770_collect-cpu-temp.pm.log 00:08:03.339 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1729163770_collect-bmc-pm.bmc.pm.log 00:08:03.906 13:16:11 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:08:03.906 13:16:11 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:08:03.906 13:16:11 -- spdk/autopackage.sh@14 -- $ timing_finish 00:08:03.906 13:16:11 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:08:03.906 13:16:11 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:08:03.906 13:16:11 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:03.906 13:16:11 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:08:03.906 13:16:11 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:03.906 13:16:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:03.906 13:16:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:03.906 13:16:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:08:03.906 13:16:11 -- pm/common@44 -- $ pid=3862693 00:08:03.906 13:16:11 -- pm/common@50 -- $ kill -TERM 3862693 00:08:03.906 13:16:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:03.906 13:16:11 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:08:03.906 13:16:11 -- pm/common@44 -- $ pid=3862694 00:08:03.906 13:16:11 -- pm/common@50 -- $ kill -TERM 3862694 00:08:03.906 13:16:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:03.906 13:16:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:08:03.906 13:16:11 -- pm/common@44 -- $ pid=3862696 00:08:03.906 13:16:11 -- pm/common@50 -- $ kill -TERM 3862696 00:08:03.906 13:16:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:03.906 13:16:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:08:03.906 13:16:11 -- pm/common@44 -- $ pid=3862719 00:08:03.906 13:16:11 -- pm/common@50 -- $ sudo -E kill -TERM 3862719 00:08:03.906 + [[ -n 3714179 ]] 00:08:03.906 + sudo kill 3714179 00:08:03.916 [Pipeline] } 00:08:03.931 [Pipeline] // stage 00:08:03.936 [Pipeline] } 00:08:03.952 [Pipeline] // timeout 00:08:03.957 [Pipeline] } 00:08:03.972 [Pipeline] // catchError 00:08:03.977 [Pipeline] } 00:08:03.992 [Pipeline] // wrap 00:08:03.998 [Pipeline] } 00:08:04.011 [Pipeline] // catchError 00:08:04.021 [Pipeline] stage 00:08:04.024 [Pipeline] { (Epilogue) 00:08:04.037 [Pipeline] catchError 00:08:04.039 [Pipeline] { 00:08:04.053 [Pipeline] echo 00:08:04.055 Cleanup processes 00:08:04.061 [Pipeline] sh 00:08:04.347 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:04.347 3862825 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache 00:08:04.347 3863256 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:04.361 [Pipeline] sh 00:08:04.647 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:04.647 ++ grep -v 'sudo pgrep' 00:08:04.647 ++ awk '{print $1}' 00:08:04.647 + sudo kill -9 3862825 00:08:04.659 [Pipeline] sh 00:08:04.943 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:08:04.944 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:08:04.944 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:08:06.321 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:08:16.314 [Pipeline] sh 00:08:16.596 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:08:16.596 Artifacts sizes are good 00:08:16.610 [Pipeline] archiveArtifacts 00:08:16.618 Archiving artifacts 00:08:16.755 [Pipeline] sh 00:08:17.041 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest 00:08:17.057 [Pipeline] cleanWs 00:08:17.068 [WS-CLEANUP] Deleting project workspace... 00:08:17.068 [WS-CLEANUP] Deferred wipeout is used... 00:08:17.075 [WS-CLEANUP] done 00:08:17.077 [Pipeline] } 00:08:17.098 [Pipeline] // catchError 00:08:17.111 [Pipeline] sh 00:08:17.400 + logger -p user.info -t JENKINS-CI 00:08:17.440 [Pipeline] } 00:08:17.455 [Pipeline] // stage 00:08:17.460 [Pipeline] } 00:08:17.475 [Pipeline] // node 00:08:17.481 [Pipeline] End of Pipeline 00:08:17.523 Finished: SUCCESS